00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1059 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3726 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.058 The recommended git tool is: git 00:00:00.059 using credential 00000000-0000-0000-0000-000000000002 00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.088 Fetching changes from the remote Git repository 00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.128 Using shallow fetch with depth 1 00:00:00.128 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.128 > git --version # timeout=10 00:00:00.167 > git --version # 'git version 2.39.2' 00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.194 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.194 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.429 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.441 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.455 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.455 > git config core.sparsecheckout # timeout=10 00:00:04.468 > git read-tree -mu HEAD # timeout=10 00:00:04.483 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.497 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.498 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.579 [Pipeline] Start of Pipeline 00:00:04.592 [Pipeline] library 00:00:04.594 Loading library shm_lib@master 00:00:04.594 Library shm_lib@master is cached. Copying from home. 00:00:04.607 [Pipeline] node 00:00:04.617 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.618 [Pipeline] { 00:00:04.628 [Pipeline] catchError 00:00:04.629 [Pipeline] { 00:00:04.640 [Pipeline] wrap 00:00:04.649 [Pipeline] { 00:00:04.656 [Pipeline] stage 00:00:04.657 [Pipeline] { (Prologue) 00:00:04.671 [Pipeline] echo 00:00:04.672 Node: VM-host-SM9 00:00:04.676 [Pipeline] cleanWs 00:00:04.684 [WS-CLEANUP] Deleting project workspace... 00:00:04.684 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.690 [WS-CLEANUP] done 00:00:04.892 [Pipeline] setCustomBuildProperty 00:00:04.983 [Pipeline] httpRequest 00:00:05.372 [Pipeline] echo 00:00:05.374 Sorcerer 10.211.164.20 is alive 00:00:05.380 [Pipeline] retry 00:00:05.381 [Pipeline] { 00:00:05.392 [Pipeline] httpRequest 00:00:05.395 HttpMethod: GET 00:00:05.396 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.397 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.402 Response Code: HTTP/1.1 200 OK 00:00:05.402 Success: Status code 200 is in the accepted range: 200,404 00:00:05.403 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.391 [Pipeline] } 00:00:07.407 [Pipeline] // retry 00:00:07.413 [Pipeline] sh 00:00:07.693 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.707 [Pipeline] httpRequest 00:00:08.020 [Pipeline] echo 00:00:08.022 Sorcerer 10.211.164.20 is alive 00:00:08.031 [Pipeline] retry 00:00:08.033 [Pipeline] { 00:00:08.047 [Pipeline] httpRequest 00:00:08.051 HttpMethod: GET 00:00:08.052 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.053 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.054 Response Code: HTTP/1.1 200 OK 00:00:08.055 Success: Status code 200 is in the accepted range: 200,404 00:00:08.055 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:29.085 [Pipeline] } 00:00:29.103 [Pipeline] // retry 00:00:29.111 [Pipeline] sh 00:00:29.392 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:31.943 [Pipeline] sh 00:00:32.223 + git -C spdk log --oneline -n5 00:00:32.223 c13c99a5e test: Various fixes for Fedora40 00:00:32.223 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:32.223 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:32.223 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:32.223 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:32.241 [Pipeline] withCredentials 00:00:32.251 > git --version # timeout=10 00:00:32.265 > git --version # 'git version 2.39.2' 00:00:32.281 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:32.283 [Pipeline] { 00:00:32.293 [Pipeline] retry 00:00:32.295 [Pipeline] { 00:00:32.309 [Pipeline] sh 00:00:32.591 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:32.602 [Pipeline] } 00:00:32.621 [Pipeline] // retry 00:00:32.626 [Pipeline] } 00:00:32.642 [Pipeline] // withCredentials 00:00:32.651 [Pipeline] httpRequest 00:00:33.095 [Pipeline] echo 00:00:33.097 Sorcerer 10.211.164.20 is alive 00:00:33.107 [Pipeline] retry 00:00:33.109 [Pipeline] { 00:00:33.124 [Pipeline] httpRequest 00:00:33.128 HttpMethod: GET 00:00:33.129 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:33.129 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:33.141 Response Code: HTTP/1.1 200 OK 00:00:33.142 Success: Status code 200 is in the accepted range: 200,404 00:00:33.142 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:02.045 [Pipeline] } 00:01:02.063 
[Pipeline] // retry 00:01:02.071 [Pipeline] sh 00:01:02.351 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:03.736 [Pipeline] sh 00:01:04.013 + git -C dpdk log --oneline -n5 00:01:04.013 caf0f5d395 version: 22.11.4 00:01:04.013 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:04.013 dc9c799c7d vhost: fix missing spinlock unlock 00:01:04.013 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:04.013 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:04.028 [Pipeline] writeFile 00:01:04.041 [Pipeline] sh 00:01:04.321 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:04.333 [Pipeline] sh 00:01:04.614 + cat autorun-spdk.conf 00:01:04.614 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.614 SPDK_TEST_NVMF=1 00:01:04.614 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.614 SPDK_TEST_URING=1 00:01:04.614 SPDK_TEST_USDT=1 00:01:04.614 SPDK_RUN_UBSAN=1 00:01:04.614 NET_TYPE=virt 00:01:04.614 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:04.614 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:04.614 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.621 RUN_NIGHTLY=1 00:01:04.623 [Pipeline] } 00:01:04.636 [Pipeline] // stage 00:01:04.650 [Pipeline] stage 00:01:04.652 [Pipeline] { (Run VM) 00:01:04.665 [Pipeline] sh 00:01:04.947 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:04.947 + echo 'Start stage prepare_nvme.sh' 00:01:04.947 Start stage prepare_nvme.sh 00:01:04.947 + [[ -n 0 ]] 00:01:04.947 + disk_prefix=ex0 00:01:04.947 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:04.947 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:04.947 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:04.947 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.947 ++ SPDK_TEST_NVMF=1 00:01:04.947 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.947 ++ SPDK_TEST_URING=1 00:01:04.947 ++ SPDK_TEST_USDT=1 00:01:04.947 ++ SPDK_RUN_UBSAN=1 00:01:04.947 ++ NET_TYPE=virt 00:01:04.947 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:04.947 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:04.947 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.947 ++ RUN_NIGHTLY=1 00:01:04.947 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:04.947 + nvme_files=() 00:01:04.947 + declare -A nvme_files 00:01:04.947 + backend_dir=/var/lib/libvirt/images/backends 00:01:04.947 + nvme_files['nvme.img']=5G 00:01:04.947 + nvme_files['nvme-cmb.img']=5G 00:01:04.947 + nvme_files['nvme-multi0.img']=4G 00:01:04.947 + nvme_files['nvme-multi1.img']=4G 00:01:04.947 + nvme_files['nvme-multi2.img']=4G 00:01:04.947 + nvme_files['nvme-openstack.img']=8G 00:01:04.947 + nvme_files['nvme-zns.img']=5G 00:01:04.947 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:04.947 + (( SPDK_TEST_FTL == 1 )) 00:01:04.947 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:04.947 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:04.947 + for nvme in "${!nvme_files[@]}" 00:01:04.947 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:04.947 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.947 + for nvme in "${!nvme_files[@]}" 00:01:04.947 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:04.947 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.947 + for nvme in "${!nvme_files[@]}" 00:01:04.947 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:04.947 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:04.947 + for nvme in "${!nvme_files[@]}" 00:01:04.947 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:04.947 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.947 + for nvme in "${!nvme_files[@]}" 00:01:04.947 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:05.206 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.206 + for nvme in "${!nvme_files[@]}" 00:01:05.206 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:05.206 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.207 + for nvme in "${!nvme_files[@]}" 00:01:05.207 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:05.466 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.466 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:05.466 + echo 'End stage prepare_nvme.sh' 00:01:05.466 End stage prepare_nvme.sh 00:01:05.477 [Pipeline] sh 00:01:05.755 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:05.756 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:05.756 00:01:05.756 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:05.756 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:05.756 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:05.756 HELP=0 00:01:05.756 DRY_RUN=0 00:01:05.756 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:05.756 NVME_DISKS_TYPE=nvme,nvme, 00:01:05.756 NVME_AUTO_CREATE=0 00:01:05.756 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:05.756 NVME_CMB=,, 00:01:05.756 NVME_PMR=,, 00:01:05.756 NVME_ZNS=,, 00:01:05.756 NVME_MS=,, 00:01:05.756 NVME_FDP=,, 
00:01:05.756 SPDK_VAGRANT_DISTRO=fedora39 00:01:05.756 SPDK_VAGRANT_VMCPU=10 00:01:05.756 SPDK_VAGRANT_VMRAM=12288 00:01:05.756 SPDK_VAGRANT_PROVIDER=libvirt 00:01:05.756 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:05.756 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:05.756 SPDK_OPENSTACK_NETWORK=0 00:01:05.756 VAGRANT_PACKAGE_BOX=0 00:01:05.756 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:05.756 FORCE_DISTRO=true 00:01:05.756 VAGRANT_BOX_VERSION= 00:01:05.756 EXTRA_VAGRANTFILES= 00:01:05.756 NIC_MODEL=e1000 00:01:05.756 00:01:05.756 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:05.756 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:08.291 Bringing machine 'default' up with 'libvirt' provider... 00:01:08.859 ==> default: Creating image (snapshot of base box volume). 00:01:09.120 ==> default: Creating domain with the following settings... 00:01:09.120 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734241290_2cf62af9dfdd7326f81d 00:01:09.120 ==> default: -- Domain type: kvm 00:01:09.120 ==> default: -- Cpus: 10 00:01:09.120 ==> default: -- Feature: acpi 00:01:09.120 ==> default: -- Feature: apic 00:01:09.120 ==> default: -- Feature: pae 00:01:09.120 ==> default: -- Memory: 12288M 00:01:09.120 ==> default: -- Memory Backing: hugepages: 00:01:09.120 ==> default: -- Management MAC: 00:01:09.120 ==> default: -- Loader: 00:01:09.120 ==> default: -- Nvram: 00:01:09.120 ==> default: -- Base box: spdk/fedora39 00:01:09.120 ==> default: -- Storage pool: default 00:01:09.120 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734241290_2cf62af9dfdd7326f81d.img (20G) 00:01:09.120 ==> default: -- Volume Cache: default 00:01:09.120 ==> default: -- Kernel: 00:01:09.120 ==> default: -- Initrd: 00:01:09.120 ==> default: -- Graphics Type: vnc 00:01:09.120 ==> default: -- Graphics Port: -1 00:01:09.120 ==> default: -- Graphics IP: 127.0.0.1 00:01:09.120 ==> default: -- Graphics Password: Not defined 00:01:09.120 ==> default: -- Video Type: cirrus 00:01:09.120 ==> default: -- Video VRAM: 9216 00:01:09.120 ==> default: -- Sound Type: 00:01:09.120 ==> default: -- Keymap: en-us 00:01:09.120 ==> default: -- TPM Path: 00:01:09.120 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:09.120 ==> default: -- Command line args: 00:01:09.120 ==> default: -> value=-device, 00:01:09.120 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:09.120 ==> default: -> value=-drive, 00:01:09.120 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:09.120 ==> default: -> value=-device, 00:01:09.120 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.120 ==> default: -> value=-device, 00:01:09.120 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:09.120 ==> default: -> value=-drive, 00:01:09.120 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:09.120 ==> default: -> value=-device, 00:01:09.120 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.120 ==> default: -> value=-drive, 00:01:09.120 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:09.120 ==> default: -> value=-device, 00:01:09.120 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.120 ==> default: -> value=-drive, 00:01:09.120 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:09.120 ==> default: -> value=-device, 00:01:09.120 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.120 ==> default: Creating shared folders metadata... 00:01:09.380 ==> default: Starting domain. 00:01:10.758 ==> default: Waiting for domain to get an IP address... 00:01:28.875 ==> default: Waiting for SSH to become available... 00:01:28.875 ==> default: Configuring and enabling network interfaces... 00:01:31.410 default: SSH address: 192.168.121.235:22 00:01:31.410 default: SSH username: vagrant 00:01:31.410 default: SSH auth method: private key 00:01:33.313 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:41.431 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:46.699 ==> default: Mounting SSHFS shared folder... 00:01:48.084 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:48.084 ==> default: Checking Mount.. 00:01:49.501 ==> default: Folder Successfully Mounted! 00:01:49.501 ==> default: Running provisioner: file... 00:01:50.069 default: ~/.gitconfig => .gitconfig 00:01:50.637 00:01:50.637 SUCCESS! 00:01:50.637 00:01:50.637 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:50.637 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:50.637 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:50.637 00:01:50.645 [Pipeline] } 00:01:50.659 [Pipeline] // stage 00:01:50.668 [Pipeline] dir 00:01:50.668 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:50.670 [Pipeline] { 00:01:50.682 [Pipeline] catchError 00:01:50.683 [Pipeline] { 00:01:50.696 [Pipeline] sh 00:01:50.973 + vagrant ssh-config --host vagrant 00:01:50.973 + sed -ne /^Host/,$p 00:01:50.973 + tee ssh_conf 00:01:54.262 Host vagrant 00:01:54.262 HostName 192.168.121.235 00:01:54.262 User vagrant 00:01:54.262 Port 22 00:01:54.262 UserKnownHostsFile /dev/null 00:01:54.262 StrictHostKeyChecking no 00:01:54.262 PasswordAuthentication no 00:01:54.262 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:54.262 IdentitiesOnly yes 00:01:54.262 LogLevel FATAL 00:01:54.262 ForwardAgent yes 00:01:54.262 ForwardX11 yes 00:01:54.262 00:01:54.276 [Pipeline] withEnv 00:01:54.278 [Pipeline] { 00:01:54.291 [Pipeline] sh 00:01:54.569 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:54.569 source /etc/os-release 00:01:54.569 [[ -e /image.version ]] && img=$(< /image.version) 00:01:54.569 # Minimal, systemd-like check. 
00:01:54.569 if [[ -e /.dockerenv ]]; then 00:01:54.569 # Clear garbage from the node's name: 00:01:54.569 # agt-er_autotest_547-896 -> autotest_547-896 00:01:54.569 # $HOSTNAME is the actual container id 00:01:54.569 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:54.570 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:54.570 # We can assume this is a mount from a host where container is running, 00:01:54.570 # so fetch its hostname to easily identify the target swarm worker. 00:01:54.570 container="$(< /etc/hostname) ($agent)" 00:01:54.570 else 00:01:54.570 # Fallback 00:01:54.570 container=$agent 00:01:54.570 fi 00:01:54.570 fi 00:01:54.570 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:54.570 00:01:54.581 [Pipeline] } 00:01:54.596 [Pipeline] // withEnv 00:01:54.604 [Pipeline] setCustomBuildProperty 00:01:54.619 [Pipeline] stage 00:01:54.621 [Pipeline] { (Tests) 00:01:54.637 [Pipeline] sh 00:01:54.917 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:55.189 [Pipeline] sh 00:01:55.469 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:55.484 [Pipeline] timeout 00:01:55.484 Timeout set to expire in 1 hr 0 min 00:01:55.486 [Pipeline] { 00:01:55.502 [Pipeline] sh 00:01:55.782 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:56.348 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:56.360 [Pipeline] sh 00:01:56.639 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:56.912 [Pipeline] sh 00:01:57.191 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:57.207 [Pipeline] sh 00:01:57.486 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:57.486 ++ readlink -f spdk_repo 00:01:57.486 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:57.486 + [[ -n /home/vagrant/spdk_repo ]] 00:01:57.486 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:57.486 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:57.486 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:57.486 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:57.486 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:57.486 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:57.486 + cd /home/vagrant/spdk_repo 00:01:57.486 + source /etc/os-release 00:01:57.486 ++ NAME='Fedora Linux' 00:01:57.486 ++ VERSION='39 (Cloud Edition)' 00:01:57.486 ++ ID=fedora 00:01:57.486 ++ VERSION_ID=39 00:01:57.486 ++ VERSION_CODENAME= 00:01:57.486 ++ PLATFORM_ID=platform:f39 00:01:57.486 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:57.486 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:57.486 ++ LOGO=fedora-logo-icon 00:01:57.486 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:57.486 ++ HOME_URL=https://fedoraproject.org/ 00:01:57.486 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:57.486 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:57.486 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:57.486 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:57.486 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:57.486 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:57.486 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:57.486 ++ SUPPORT_END=2024-11-12 00:01:57.486 ++ VARIANT='Cloud Edition' 00:01:57.486 ++ VARIANT_ID=cloud 00:01:57.486 + uname -a 00:01:57.748 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:57.748 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:57.748 Hugepages 00:01:57.748 node hugesize free / total 00:01:57.748 node0 1048576kB 0 / 0 00:01:57.748 node0 2048kB 0 / 0 00:01:57.748 00:01:57.748 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.748 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:57.748 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:57.748 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:57.748 + rm -f /tmp/spdk-ld-path 00:01:57.748 + source autorun-spdk.conf 00:01:57.748 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.748 ++ SPDK_TEST_NVMF=1 00:01:57.748 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.748 ++ SPDK_TEST_URING=1 00:01:57.748 ++ SPDK_TEST_USDT=1 00:01:57.748 ++ SPDK_RUN_UBSAN=1 00:01:57.748 ++ NET_TYPE=virt 00:01:57.748 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:57.748 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:57.748 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.748 ++ RUN_NIGHTLY=1 00:01:57.748 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.748 + [[ -n '' ]] 00:01:57.748 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:57.748 + for M in /var/spdk/build-*-manifest.txt 00:01:57.748 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:57.748 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.748 + for M in /var/spdk/build-*-manifest.txt 00:01:57.748 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.748 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:58.011 + for M in /var/spdk/build-*-manifest.txt 00:01:58.011 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:58.011 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:58.011 ++ uname 00:01:58.011 + [[ Linux == \L\i\n\u\x ]] 00:01:58.011 + sudo dmesg -T 00:01:58.011 + sudo dmesg --clear 00:01:58.011 + dmesg_pid=5973 00:01:58.011 + sudo dmesg -Tw 00:01:58.011 + [[ Fedora Linux == FreeBSD ]] 00:01:58.011 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:58.011 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:58.011 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:58.011 + [[ -x /usr/src/fio-static/fio ]] 00:01:58.011 + export FIO_BIN=/usr/src/fio-static/fio 00:01:58.011 + FIO_BIN=/usr/src/fio-static/fio 00:01:58.011 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:58.011 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:58.011 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:58.011 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:58.011 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:58.011 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:58.011 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:58.011 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:58.011 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:58.011 Test configuration: 00:01:58.011 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:58.011 SPDK_TEST_NVMF=1 00:01:58.011 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:58.011 SPDK_TEST_URING=1 00:01:58.011 SPDK_TEST_USDT=1 00:01:58.011 SPDK_RUN_UBSAN=1 00:01:58.011 NET_TYPE=virt 00:01:58.011 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:58.011 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:58.011 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:58.011 RUN_NIGHTLY=1 05:42:19 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:58.011 05:42:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:58.011 05:42:19 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:58.011 05:42:19 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:58.011 05:42:19 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:58.011 05:42:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.011 05:42:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.011 05:42:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.011 05:42:19 -- paths/export.sh@5 -- $ export PATH 00:01:58.011 05:42:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.011 05:42:19 -- 
common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:58.011 05:42:19 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:58.011 05:42:19 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734241339.XXXXXX 00:01:58.011 05:42:19 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734241339.daf7S7 00:01:58.011 05:42:19 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:58.011 05:42:19 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:01:58.011 05:42:19 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:58.011 05:42:19 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:58.011 05:42:19 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:58.011 05:42:19 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:58.011 05:42:19 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:58.011 05:42:19 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:58.011 05:42:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.011 05:42:19 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:01:58.011 05:42:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.011 05:42:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.011 05:42:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:58.011 05:42:19 -- spdk/autobuild.sh@16 -- $ date -u 00:01:58.011 Sun Dec 15 05:42:19 AM UTC 2024 00:01:58.011 05:42:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.011 LTS-67-gc13c99a5e 00:01:58.011 05:42:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:58.011 05:42:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:58.011 05:42:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:58.011 05:42:19 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:58.011 05:42:19 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:58.011 05:42:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.011 ************************************ 00:01:58.011 START TEST ubsan 00:01:58.011 ************************************ 00:01:58.011 using ubsan 00:01:58.011 05:42:19 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:58.011 00:01:58.011 real 0m0.000s 00:01:58.011 user 0m0.000s 00:01:58.011 sys 0m0.000s 00:01:58.011 05:42:19 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:58.011 05:42:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.011 ************************************ 00:01:58.011 END TEST ubsan 00:01:58.011 ************************************ 00:01:58.270 05:42:19 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:58.270 05:42:19 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:58.270 05:42:19 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:58.270 05:42:19 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:58.270 05:42:19 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:58.270 05:42:19 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:58.270 ************************************ 00:01:58.270 START TEST build_native_dpdk 00:01:58.270 ************************************ 00:01:58.270 05:42:19 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:01:58.270 05:42:19 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:58.270 05:42:19 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:58.270 05:42:19 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:58.270 05:42:19 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:58.270 05:42:19 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:58.270 05:42:19 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:58.270 05:42:19 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:58.270 05:42:19 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:58.270 05:42:19 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:58.270 05:42:19 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:58.270 05:42:19 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:58.271 05:42:19 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:58.271 05:42:19 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:58.271 05:42:19 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:58.271 05:42:19 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:58.271 05:42:19 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:58.271 05:42:19 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:58.271 05:42:19 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:58.271 caf0f5d395 version: 22.11.4 00:01:58.271 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:58.271 dc9c799c7d vhost: fix missing spinlock unlock 00:01:58.271 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:58.271 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:58.271 05:42:19 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:58.271 05:42:19 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:58.271 05:42:19 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:58.271 05:42:19 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:58.271 05:42:19 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:58.271 05:42:19 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:58.271 05:42:19 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:58.271 05:42:19 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:58.271 05:42:19 -- common/autobuild_common.sh@167 -- $ 
cd /home/vagrant/spdk_repo/dpdk 00:01:58.271 05:42:19 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:58.271 05:42:19 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:58.271 05:42:19 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:58.271 05:42:19 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:58.271 05:42:19 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:58.271 05:42:19 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:58.271 05:42:19 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:58.271 05:42:19 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:58.271 05:42:19 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.271 05:42:19 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:58.271 05:42:19 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:58.271 05:42:19 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:58.271 05:42:19 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:58.271 05:42:19 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:58.271 05:42:19 -- scripts/common.sh@343 -- $ case "$op" in 00:01:58.271 05:42:19 -- scripts/common.sh@344 -- $ : 1 00:01:58.271 05:42:19 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:58.271 05:42:19 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:58.271 05:42:19 -- scripts/common.sh@364 -- $ decimal 22 00:01:58.271 05:42:19 -- scripts/common.sh@352 -- $ local d=22 00:01:58.271 05:42:19 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:58.271 05:42:19 -- scripts/common.sh@354 -- $ echo 22 00:01:58.271 05:42:19 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:58.271 05:42:19 -- scripts/common.sh@365 -- $ decimal 21 00:01:58.271 05:42:19 -- scripts/common.sh@352 -- $ local d=21 00:01:58.271 05:42:19 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:58.271 05:42:19 -- scripts/common.sh@354 -- $ echo 21 00:01:58.271 05:42:19 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:58.271 05:42:19 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:58.271 05:42:19 -- scripts/common.sh@366 -- $ return 1 00:01:58.271 05:42:19 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:58.271 patching file config/rte_config.h 00:01:58.271 Hunk #1 succeeded at 60 (offset 1 line). 00:01:58.271 05:42:19 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:58.271 05:42:19 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:58.271 05:42:19 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:58.271 05:42:19 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:58.271 05:42:19 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:58.271 05:42:19 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:58.271 05:42:19 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.271 05:42:19 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:58.271 05:42:19 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:58.271 05:42:19 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:58.271 05:42:19 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:58.271 05:42:19 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:58.271 05:42:19 -- scripts/common.sh@343 -- $ case "$op" in 00:01:58.271 05:42:19 -- scripts/common.sh@344 -- $ : 1 00:01:58.271 05:42:19 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:58.271 05:42:19 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.271 05:42:19 -- scripts/common.sh@364 -- $ decimal 22 00:01:58.271 05:42:19 -- scripts/common.sh@352 -- $ local d=22 00:01:58.271 05:42:19 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:58.271 05:42:19 -- scripts/common.sh@354 -- $ echo 22 00:01:58.271 05:42:19 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:58.271 05:42:19 -- scripts/common.sh@365 -- $ decimal 24 00:01:58.271 05:42:19 -- scripts/common.sh@352 -- $ local d=24 00:01:58.271 05:42:19 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:58.271 05:42:19 -- scripts/common.sh@354 -- $ echo 24 00:01:58.271 05:42:19 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:58.271 05:42:19 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:58.271 05:42:19 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:58.271 05:42:19 -- scripts/common.sh@367 -- $ return 0 00:01:58.271 05:42:19 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:58.271 patching file lib/pcapng/rte_pcapng.c 00:01:58.271 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:58.271 05:42:19 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:58.271 05:42:19 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:58.271 05:42:19 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:58.271 05:42:19 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:58.271 05:42:19 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:03.540 The Meson build system 00:02:03.540 Version: 1.5.0 00:02:03.540 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:03.541 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:03.541 Build type: native build 00:02:03.541 Program cat found: YES (/usr/bin/cat) 00:02:03.541 Project name: DPDK 00:02:03.541 Project version: 22.11.4 00:02:03.541 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:03.541 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:03.541 Host machine cpu family: x86_64 00:02:03.541 Host machine cpu: x86_64 00:02:03.541 Message: ## Building in Developer Mode ## 00:02:03.541 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:03.541 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:03.541 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:03.541 Program objdump found: YES (/usr/bin/objdump) 00:02:03.541 Program python3 found: YES (/usr/bin/python3) 00:02:03.541 Program cat found: YES (/usr/bin/cat) 00:02:03.541 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:03.541 Checking for size of "void *" : 8 00:02:03.541 Checking for size of "void *" : 8 (cached) 00:02:03.541 Library m found: YES 00:02:03.541 Library numa found: YES 00:02:03.541 Has header "numaif.h" : YES 00:02:03.541 Library fdt found: NO 00:02:03.541 Library execinfo found: NO 00:02:03.541 Has header "execinfo.h" : YES 00:02:03.541 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:03.541 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:03.541 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:03.541 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:03.541 Run-time dependency openssl found: YES 3.1.1 00:02:03.541 Run-time dependency libpcap found: YES 1.10.4 00:02:03.541 Has header "pcap.h" with dependency libpcap: YES 00:02:03.541 Compiler for C supports arguments -Wcast-qual: YES 00:02:03.541 Compiler for C supports arguments -Wdeprecated: YES 00:02:03.541 Compiler for C supports arguments -Wformat: YES 00:02:03.541 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:03.541 Compiler for C supports arguments -Wformat-security: NO 00:02:03.541 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.541 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:03.541 Compiler for C supports arguments -Wnested-externs: YES 00:02:03.541 Compiler for C supports arguments -Wold-style-definition: YES 00:02:03.541 Compiler for C supports arguments -Wpointer-arith: YES 00:02:03.541 Compiler for C supports arguments -Wsign-compare: YES 00:02:03.541 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:03.541 Compiler for C supports arguments -Wundef: YES 00:02:03.541 Compiler for C supports arguments -Wwrite-strings: YES 00:02:03.541 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:03.541 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:03.541 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:03.541 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:03.541 Compiler for C supports arguments -mavx512f: YES 00:02:03.541 Checking if "AVX512 checking" compiles: YES 00:02:03.541 Fetching value of define "__SSE4_2__" : 1 00:02:03.541 Fetching value of define "__AES__" : 1 00:02:03.541 Fetching value of define "__AVX__" : 1 00:02:03.541 Fetching value of define "__AVX2__" : 1 00:02:03.541 Fetching value of define "__AVX512BW__" : (undefined) 00:02:03.541 Fetching value of define "__AVX512CD__" : (undefined) 00:02:03.541 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:03.541 Fetching value of define "__AVX512F__" : (undefined) 00:02:03.541 Fetching value of define "__AVX512VL__" : (undefined) 00:02:03.541 Fetching value of define "__PCLMUL__" : 1 00:02:03.541 Fetching value of define "__RDRND__" : 1 00:02:03.541 Fetching value of define "__RDSEED__" : 1 00:02:03.541 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:03.541 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:03.541 Message: lib/kvargs: Defining dependency "kvargs" 00:02:03.541 Message: lib/telemetry: Defining dependency "telemetry" 00:02:03.541 Checking for function "getentropy" : YES 00:02:03.541 Message: lib/eal: Defining dependency "eal" 00:02:03.541 Message: lib/ring: Defining dependency "ring" 00:02:03.541 Message: lib/rcu: Defining dependency "rcu" 00:02:03.541 Message: lib/mempool: Defining dependency "mempool" 00:02:03.541 Message: lib/mbuf: Defining dependency "mbuf" 00:02:03.541 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:03.541 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.541 Compiler for C supports arguments -mpclmul: YES 00:02:03.541 Compiler for C supports arguments -maes: YES 00:02:03.541 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.541 Compiler for C supports arguments -mavx512bw: YES 00:02:03.541 Compiler for C supports arguments -mavx512dq: YES 00:02:03.541 Compiler for C supports arguments -mavx512vl: YES 00:02:03.541 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.541 Compiler for C supports arguments -mavx2: YES 00:02:03.541 Compiler for C supports arguments -mavx: YES 00:02:03.541 Message: lib/net: Defining dependency "net" 00:02:03.541 Message: lib/meter: Defining dependency "meter" 00:02:03.541 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.541 Message: lib/pci: Defining dependency "pci" 00:02:03.541 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.541 Message: lib/metrics: Defining dependency "metrics" 00:02:03.541 Message: lib/hash: Defining dependency "hash" 00:02:03.541 Message: lib/timer: Defining dependency "timer" 00:02:03.541 Fetching value of define "__AVX2__" : 1 (cached) 00:02:03.541 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.541 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:03.541 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:03.541 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:03.541 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:03.541 Message: lib/acl: Defining dependency "acl" 00:02:03.541 Message: lib/bbdev: Defining dependency "bbdev" 00:02:03.541 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:03.541 Run-time dependency libelf found: YES 0.191 00:02:03.541 Message: lib/bpf: Defining dependency "bpf" 00:02:03.541 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:03.541 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.541 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.541 Message: lib/distributor: Defining dependency "distributor" 00:02:03.541 Message: lib/efd: Defining dependency "efd" 00:02:03.541 Message: lib/eventdev: Defining dependency "eventdev" 00:02:03.541 Message: lib/gpudev: Defining dependency "gpudev" 00:02:03.541 Message: lib/gro: Defining dependency "gro" 00:02:03.541 Message: lib/gso: Defining dependency "gso" 00:02:03.541 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:03.541 Message: lib/jobstats: Defining dependency "jobstats" 00:02:03.541 Message: lib/latencystats: Defining dependency "latencystats" 00:02:03.541 Message: lib/lpm: Defining dependency "lpm" 00:02:03.541 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.541 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:03.541 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:03.541 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:03.541 Message: lib/member: Defining dependency "member" 00:02:03.541 Message: lib/pcapng: Defining dependency "pcapng" 00:02:03.541 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.541 Message: lib/power: Defining dependency "power" 00:02:03.541 Message: lib/rawdev: Defining dependency "rawdev" 00:02:03.541 Message: lib/regexdev: Defining dependency "regexdev" 00:02:03.541 Message: lib/dmadev: Defining dependency "dmadev" 00:02:03.541 Message: lib/rib: Defining 
dependency "rib" 00:02:03.541 Message: lib/reorder: Defining dependency "reorder" 00:02:03.541 Message: lib/sched: Defining dependency "sched" 00:02:03.541 Message: lib/security: Defining dependency "security" 00:02:03.541 Message: lib/stack: Defining dependency "stack" 00:02:03.541 Has header "linux/userfaultfd.h" : YES 00:02:03.541 Message: lib/vhost: Defining dependency "vhost" 00:02:03.541 Message: lib/ipsec: Defining dependency "ipsec" 00:02:03.541 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.541 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:03.541 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:03.541 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:03.541 Message: lib/fib: Defining dependency "fib" 00:02:03.541 Message: lib/port: Defining dependency "port" 00:02:03.541 Message: lib/pdump: Defining dependency "pdump" 00:02:03.541 Message: lib/table: Defining dependency "table" 00:02:03.541 Message: lib/pipeline: Defining dependency "pipeline" 00:02:03.541 Message: lib/graph: Defining dependency "graph" 00:02:03.541 Message: lib/node: Defining dependency "node" 00:02:03.541 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.541 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.541 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.541 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.541 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:03.541 Compiler for C supports arguments -Wno-unused-value: YES 00:02:03.541 Compiler for C supports arguments -Wno-format: YES 00:02:03.541 Compiler for C supports arguments -Wno-format-security: YES 00:02:03.541 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:05.444 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:05.444 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:05.444 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:05.444 Fetching value of define "__AVX2__" : 1 (cached) 00:02:05.444 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:05.444 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.444 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:05.444 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:05.444 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:05.444 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.444 Configuring doxy-api.conf using configuration 00:02:05.444 Program sphinx-build found: NO 00:02:05.444 Configuring rte_build_config.h using configuration 00:02:05.444 Message: 00:02:05.444 ================= 00:02:05.444 Applications Enabled 00:02:05.444 ================= 00:02:05.444 00:02:05.444 apps: 00:02:05.444 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:05.444 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:05.444 test-security-perf, 00:02:05.444 00:02:05.444 Message: 00:02:05.444 ================= 00:02:05.444 Libraries Enabled 00:02:05.444 ================= 00:02:05.444 00:02:05.444 libs: 00:02:05.444 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:05.444 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:05.444 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:05.444 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:05.444 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:05.444 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:05.444 table, pipeline, graph, node, 00:02:05.444 00:02:05.444 Message: 00:02:05.444 =============== 00:02:05.444 Drivers Enabled 00:02:05.444 =============== 00:02:05.444 00:02:05.444 common: 00:02:05.444 00:02:05.444 bus: 00:02:05.444 pci, vdev, 00:02:05.444 mempool: 00:02:05.444 ring, 00:02:05.444 dma: 00:02:05.444 00:02:05.444 net: 00:02:05.444 i40e, 00:02:05.444 raw: 00:02:05.444 00:02:05.444 crypto: 00:02:05.444 00:02:05.444 compress: 00:02:05.444 00:02:05.444 regex: 00:02:05.444 00:02:05.444 vdpa: 00:02:05.444 00:02:05.444 event: 00:02:05.444 00:02:05.444 baseband: 00:02:05.444 00:02:05.444 gpu: 00:02:05.444 00:02:05.444 00:02:05.444 Message: 00:02:05.444 ================= 00:02:05.444 Content Skipped 00:02:05.444 ================= 00:02:05.444 00:02:05.444 apps: 00:02:05.444 00:02:05.444 libs: 00:02:05.444 kni: explicitly disabled via build config (deprecated lib) 00:02:05.444 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:05.444 00:02:05.444 drivers: 00:02:05.444 common/cpt: not in enabled drivers build config 00:02:05.444 common/dpaax: not in enabled drivers build config 00:02:05.444 common/iavf: not in enabled drivers build config 00:02:05.444 common/idpf: not in enabled drivers build config 00:02:05.444 common/mvep: not in enabled drivers build config 00:02:05.444 common/octeontx: not in enabled drivers build config 00:02:05.444 bus/auxiliary: not in enabled drivers build config 00:02:05.444 bus/dpaa: not in enabled drivers build config 00:02:05.444 bus/fslmc: not in enabled drivers build config 00:02:05.444 bus/ifpga: not in enabled drivers build config 00:02:05.444 bus/vmbus: not in enabled drivers build config 00:02:05.444 common/cnxk: not in enabled drivers build config 00:02:05.444 common/mlx5: not in enabled drivers build config 00:02:05.444 common/qat: not in enabled drivers build config 00:02:05.444 common/sfc_efx: not in enabled drivers build config 00:02:05.444 mempool/bucket: not in enabled drivers build config 00:02:05.444 mempool/cnxk: not in enabled drivers build config 00:02:05.444 mempool/dpaa: not in enabled drivers build config 00:02:05.444 mempool/dpaa2: not in enabled drivers build config 00:02:05.444 mempool/octeontx: not in enabled drivers build config 00:02:05.444 mempool/stack: not in enabled drivers build config 00:02:05.444 dma/cnxk: not in enabled drivers build config 00:02:05.444 dma/dpaa: not in enabled drivers build config 00:02:05.444 dma/dpaa2: not in enabled drivers build config 00:02:05.444 dma/hisilicon: not in enabled drivers build config 00:02:05.444 dma/idxd: not in enabled drivers build config 00:02:05.444 dma/ioat: not in enabled drivers build config 00:02:05.444 dma/skeleton: not in enabled drivers build config 00:02:05.444 net/af_packet: not in enabled drivers build config 00:02:05.444 net/af_xdp: not in enabled drivers build config 00:02:05.444 net/ark: not in enabled drivers build config 00:02:05.445 net/atlantic: not in enabled drivers build config 00:02:05.445 net/avp: not in enabled drivers build config 00:02:05.445 net/axgbe: not in enabled drivers build config 00:02:05.445 net/bnx2x: not in enabled drivers build config 00:02:05.445 net/bnxt: not in enabled drivers build config 00:02:05.445 net/bonding: not in enabled drivers build config 00:02:05.445 net/cnxk: not in enabled drivers build config 00:02:05.445 net/cxgbe: not in 
enabled drivers build config 00:02:05.445 net/dpaa: not in enabled drivers build config 00:02:05.445 net/dpaa2: not in enabled drivers build config 00:02:05.445 net/e1000: not in enabled drivers build config 00:02:05.445 net/ena: not in enabled drivers build config 00:02:05.445 net/enetc: not in enabled drivers build config 00:02:05.445 net/enetfec: not in enabled drivers build config 00:02:05.445 net/enic: not in enabled drivers build config 00:02:05.445 net/failsafe: not in enabled drivers build config 00:02:05.445 net/fm10k: not in enabled drivers build config 00:02:05.445 net/gve: not in enabled drivers build config 00:02:05.445 net/hinic: not in enabled drivers build config 00:02:05.445 net/hns3: not in enabled drivers build config 00:02:05.445 net/iavf: not in enabled drivers build config 00:02:05.445 net/ice: not in enabled drivers build config 00:02:05.445 net/idpf: not in enabled drivers build config 00:02:05.445 net/igc: not in enabled drivers build config 00:02:05.445 net/ionic: not in enabled drivers build config 00:02:05.445 net/ipn3ke: not in enabled drivers build config 00:02:05.445 net/ixgbe: not in enabled drivers build config 00:02:05.445 net/kni: not in enabled drivers build config 00:02:05.445 net/liquidio: not in enabled drivers build config 00:02:05.445 net/mana: not in enabled drivers build config 00:02:05.445 net/memif: not in enabled drivers build config 00:02:05.445 net/mlx4: not in enabled drivers build config 00:02:05.445 net/mlx5: not in enabled drivers build config 00:02:05.445 net/mvneta: not in enabled drivers build config 00:02:05.445 net/mvpp2: not in enabled drivers build config 00:02:05.445 net/netvsc: not in enabled drivers build config 00:02:05.445 net/nfb: not in enabled drivers build config 00:02:05.445 net/nfp: not in enabled drivers build config 00:02:05.445 net/ngbe: not in enabled drivers build config 00:02:05.445 net/null: not in enabled drivers build config 00:02:05.445 net/octeontx: not in enabled drivers build config 00:02:05.445 net/octeon_ep: not in enabled drivers build config 00:02:05.445 net/pcap: not in enabled drivers build config 00:02:05.445 net/pfe: not in enabled drivers build config 00:02:05.445 net/qede: not in enabled drivers build config 00:02:05.445 net/ring: not in enabled drivers build config 00:02:05.445 net/sfc: not in enabled drivers build config 00:02:05.445 net/softnic: not in enabled drivers build config 00:02:05.445 net/tap: not in enabled drivers build config 00:02:05.445 net/thunderx: not in enabled drivers build config 00:02:05.445 net/txgbe: not in enabled drivers build config 00:02:05.445 net/vdev_netvsc: not in enabled drivers build config 00:02:05.445 net/vhost: not in enabled drivers build config 00:02:05.445 net/virtio: not in enabled drivers build config 00:02:05.445 net/vmxnet3: not in enabled drivers build config 00:02:05.445 raw/cnxk_bphy: not in enabled drivers build config 00:02:05.445 raw/cnxk_gpio: not in enabled drivers build config 00:02:05.445 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:05.445 raw/ifpga: not in enabled drivers build config 00:02:05.445 raw/ntb: not in enabled drivers build config 00:02:05.445 raw/skeleton: not in enabled drivers build config 00:02:05.445 crypto/armv8: not in enabled drivers build config 00:02:05.445 crypto/bcmfs: not in enabled drivers build config 00:02:05.445 crypto/caam_jr: not in enabled drivers build config 00:02:05.445 crypto/ccp: not in enabled drivers build config 00:02:05.445 crypto/cnxk: not in enabled drivers build config 00:02:05.445 
crypto/dpaa_sec: not in enabled drivers build config 00:02:05.445 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.445 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.445 crypto/mlx5: not in enabled drivers build config 00:02:05.445 crypto/mvsam: not in enabled drivers build config 00:02:05.445 crypto/nitrox: not in enabled drivers build config 00:02:05.445 crypto/null: not in enabled drivers build config 00:02:05.445 crypto/octeontx: not in enabled drivers build config 00:02:05.445 crypto/openssl: not in enabled drivers build config 00:02:05.445 crypto/scheduler: not in enabled drivers build config 00:02:05.445 crypto/uadk: not in enabled drivers build config 00:02:05.445 crypto/virtio: not in enabled drivers build config 00:02:05.445 compress/isal: not in enabled drivers build config 00:02:05.445 compress/mlx5: not in enabled drivers build config 00:02:05.445 compress/octeontx: not in enabled drivers build config 00:02:05.445 compress/zlib: not in enabled drivers build config 00:02:05.445 regex/mlx5: not in enabled drivers build config 00:02:05.445 regex/cn9k: not in enabled drivers build config 00:02:05.445 vdpa/ifc: not in enabled drivers build config 00:02:05.445 vdpa/mlx5: not in enabled drivers build config 00:02:05.445 vdpa/sfc: not in enabled drivers build config 00:02:05.445 event/cnxk: not in enabled drivers build config 00:02:05.445 event/dlb2: not in enabled drivers build config 00:02:05.445 event/dpaa: not in enabled drivers build config 00:02:05.445 event/dpaa2: not in enabled drivers build config 00:02:05.445 event/dsw: not in enabled drivers build config 00:02:05.445 event/opdl: not in enabled drivers build config 00:02:05.445 event/skeleton: not in enabled drivers build config 00:02:05.445 event/sw: not in enabled drivers build config 00:02:05.445 event/octeontx: not in enabled drivers build config 00:02:05.445 baseband/acc: not in enabled drivers build config 00:02:05.445 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:05.445 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:05.445 baseband/la12xx: not in enabled drivers build config 00:02:05.445 baseband/null: not in enabled drivers build config 00:02:05.445 baseband/turbo_sw: not in enabled drivers build config 00:02:05.445 gpu/cuda: not in enabled drivers build config 00:02:05.445 00:02:05.445 00:02:05.445 Build targets in project: 314 00:02:05.445 00:02:05.445 DPDK 22.11.4 00:02:05.445 00:02:05.445 User defined options 00:02:05.445 libdir : lib 00:02:05.445 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:05.445 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:05.445 c_link_args : 00:02:05.445 enable_docs : false 00:02:05.445 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:05.445 enable_kmods : false 00:02:05.445 machine : native 00:02:05.445 tests : false 00:02:05.445 00:02:05.445 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.445 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
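Note: the configuration summary above is the DPDK meson configure step that SPDK's autobuild wraps; the exact command line is not echoed in this log, so the following is only a sketch reconstructed from the "User defined options" listed above (prefix, libdir, c_args and the driver list are taken verbatim from the summary; the option spellings are the standard meson built-ins and DPDK 22.11 project options). The WARNING above shows the script still passes these options to bare `meson [options]`, which meson reports as deprecated; the equivalent explicit form would be:

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

With that configuration only the pci/vdev buses, the ring mempool driver and the i40e net PMD are built (everything else lands in "Content Skipped"), which is why the ninja run below compiles 743 targets rather than a full DPDK tree.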
00:02:05.445 05:42:26 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:05.445 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:05.445 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:05.445 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:05.445 [3/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:05.445 [4/743] Generating lib/rte_telemetry_def with a custom command 00:02:05.445 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.445 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.445 [7/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.445 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.445 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.445 [10/743] Linking static target lib/librte_kvargs.a 00:02:05.445 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.445 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.445 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.445 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.703 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.703 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.703 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.703 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.703 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.703 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.703 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.703 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:05.703 [23/743] Linking target lib/librte_kvargs.so.23.0 00:02:05.703 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.703 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.961 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.961 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.961 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.961 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.961 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.961 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.961 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.961 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.961 [34/743] Linking static target lib/librte_telemetry.a 00:02:05.961 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.961 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:05.961 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.219 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.219 [39/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:06.219 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.219 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.219 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.219 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.477 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:06.477 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:06.477 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.477 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:06.477 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:06.477 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:06.477 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:06.478 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.478 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:06.478 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:06.478 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:06.478 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:06.478 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:06.478 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:06.736 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.736 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:06.736 [60/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:06.736 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:06.736 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:06.736 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:06.736 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:06.736 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:06.736 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.736 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:06.736 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.736 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.736 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.994 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.994 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.994 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.994 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.994 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.994 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.994 [77/743] Generating lib/rte_eal_mingw with a custom command 00:02:06.994 [78/743] Generating lib/rte_eal_def with a 
custom command 00:02:06.994 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.994 [80/743] Generating lib/rte_ring_def with a custom command 00:02:06.994 [81/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:06.994 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:06.994 [83/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.994 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:06.994 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:06.994 [86/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.252 [87/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:07.252 [88/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:07.252 [89/743] Linking static target lib/librte_ring.a 00:02:07.252 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:07.252 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:07.252 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:07.252 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:07.510 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.510 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:07.510 [96/743] Linking static target lib/librte_eal.a 00:02:07.767 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.768 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:07.768 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:07.768 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.768 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.768 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.768 [103/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.768 [104/743] Linking static target lib/librte_rcu.a 00:02:08.025 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:08.025 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:08.025 [107/743] Linking static target lib/librte_mempool.a 00:02:08.283 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:08.283 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.283 [110/743] Generating lib/rte_net_def with a custom command 00:02:08.283 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:08.283 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:08.283 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:08.283 [114/743] Generating lib/rte_meter_def with a custom command 00:02:08.283 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:08.541 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:08.541 [117/743] Linking static target lib/librte_meter.a 00:02:08.541 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:08.541 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.541 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:08.541 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.541 [122/743] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.799 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.799 [124/743] Linking static target lib/librte_mbuf.a 00:02:08.799 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:08.799 [126/743] Linking static target lib/librte_net.a 00:02:09.057 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.057 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.057 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:09.315 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:09.315 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:09.315 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.315 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.315 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:09.573 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.830 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.830 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:10.089 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:10.089 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:10.089 [140/743] Generating lib/rte_pci_def with a custom command 00:02:10.089 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:10.089 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.089 [143/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:10.089 [144/743] Linking static target lib/librte_pci.a 00:02:10.089 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:10.089 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:10.089 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:10.089 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:10.348 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:10.348 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.348 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:10.348 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:10.348 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:10.348 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.348 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:10.348 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:10.348 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:10.348 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:10.348 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:10.606 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.606 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:10.606 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:10.606 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:10.606 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.606 [165/743] Generating lib/rte_hash_def with a custom command 00:02:10.606 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.606 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:10.606 [168/743] Generating lib/rte_timer_def with a custom command 00:02:10.606 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.606 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:10.863 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.863 [172/743] Linking static target lib/librte_cmdline.a 00:02:10.863 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.120 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:11.121 [175/743] Linking static target lib/librte_metrics.a 00:02:11.121 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:11.121 [177/743] Linking static target lib/librte_timer.a 00:02:11.378 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.378 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.636 [180/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.636 [181/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.636 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.636 [183/743] Linking static target lib/librte_ethdev.a 00:02:11.636 [184/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:12.202 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:12.202 [186/743] Generating lib/rte_acl_def with a custom command 00:02:12.202 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:12.202 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:12.202 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:12.202 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:12.460 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:12.460 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:12.460 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:12.718 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:12.976 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:12.976 [196/743] Linking static target lib/librte_bitratestats.a 00:02:12.976 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:13.234 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.234 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:13.234 [200/743] Linking static target lib/librte_bbdev.a 00:02:13.234 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:13.492 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.492 [203/743] Linking static target lib/librte_hash.a 00:02:13.749 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:13.749 [205/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.749 [206/743] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:14.007 [207/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:14.007 [208/743] Linking static target lib/acl/libavx512_tmp.a 00:02:14.007 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:14.265 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.265 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:14.265 [212/743] Generating lib/rte_bpf_def with a custom command 00:02:14.265 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:14.265 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:14.265 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:14.522 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:14.522 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:14.522 [218/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:14.522 [219/743] Linking static target lib/librte_acl.a 00:02:14.522 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:14.779 [221/743] Linking static target lib/librte_cfgfile.a 00:02:14.779 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:14.779 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:14.779 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:14.779 [225/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.037 [226/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.037 [227/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.037 [228/743] Linking target lib/librte_eal.so.23.0 00:02:15.037 [229/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:15.037 [230/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:15.037 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:15.037 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:15.037 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:15.037 [234/743] Linking target lib/librte_ring.so.23.0 00:02:15.295 [235/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:15.295 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:15.295 [237/743] Linking target lib/librte_meter.so.23.0 00:02:15.295 [238/743] Linking target lib/librte_pci.so.23.0 00:02:15.295 [239/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:15.295 [240/743] Linking target lib/librte_rcu.so.23.0 00:02:15.295 [241/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:15.295 [242/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:15.295 [243/743] Linking target lib/librte_mempool.so.23.0 00:02:15.295 [244/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:15.295 [245/743] Linking target lib/librte_timer.so.23.0 00:02:15.295 [246/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.553 [247/743] Linking static target lib/librte_bpf.a 00:02:15.553 [248/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:15.553 [249/743] Linking 
target lib/librte_acl.so.23.0 00:02:15.553 [250/743] Linking target lib/librte_cfgfile.so.23.0 00:02:15.553 [251/743] Linking static target lib/librte_compressdev.a 00:02:15.553 [252/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:15.553 [253/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:15.553 [254/743] Linking target lib/librte_mbuf.so.23.0 00:02:15.553 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:15.553 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:15.553 [257/743] Generating lib/rte_distributor_mingw with a custom command 00:02:15.811 [258/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:15.811 [259/743] Linking target lib/librte_net.so.23.0 00:02:15.811 [260/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.811 [261/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:15.811 [262/743] Linking target lib/librte_bbdev.so.23.0 00:02:15.811 [263/743] Generating lib/rte_efd_def with a custom command 00:02:15.811 [264/743] Generating lib/rte_efd_mingw with a custom command 00:02:15.811 [265/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:15.811 [266/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:15.811 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:16.069 [268/743] Linking target lib/librte_hash.so.23.0 00:02:16.069 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:16.069 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:16.069 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:16.069 [272/743] Linking static target lib/librte_distributor.a 00:02:16.327 [273/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.327 [274/743] Linking target lib/librte_compressdev.so.23.0 00:02:16.327 [275/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.327 [276/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:16.327 [277/743] Linking target lib/librte_distributor.so.23.0 00:02:16.585 [278/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.585 [279/743] Linking target lib/librte_ethdev.so.23.0 00:02:16.585 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:16.585 [281/743] Generating lib/rte_eventdev_def with a custom command 00:02:16.585 [282/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:16.585 [283/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:16.585 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:16.843 [285/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:16.843 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:02:16.843 [287/743] Linking target lib/librte_bpf.so.23.0 00:02:17.101 [288/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:17.101 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:17.101 [290/743] Generating symbol file 
lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:17.101 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:17.359 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:17.359 [293/743] Linking static target lib/librte_efd.a 00:02:17.359 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:17.359 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.616 [296/743] Linking static target lib/librte_cryptodev.a 00:02:17.616 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.616 [298/743] Linking target lib/librte_efd.so.23.0 00:02:17.616 [299/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:17.616 [300/743] Linking static target lib/librte_gpudev.a 00:02:17.616 [301/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:17.875 [302/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:17.875 [303/743] Generating lib/rte_gro_def with a custom command 00:02:17.875 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:17.875 [305/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:17.875 [306/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:17.875 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:18.132 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:18.390 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:18.390 [310/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.390 [311/743] Linking target lib/librte_gpudev.so.23.0 00:02:18.390 [312/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:18.390 [313/743] Linking static target lib/librte_gro.a 00:02:18.390 [314/743] Generating lib/rte_gso_def with a custom command 00:02:18.648 [315/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:18.648 [316/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:18.648 [317/743] Generating lib/rte_gso_mingw with a custom command 00:02:18.648 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:18.648 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.648 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:18.648 [321/743] Linking target lib/librte_gro.so.23.0 00:02:18.906 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:18.906 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:18.906 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:19.163 [325/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:19.163 [326/743] Linking static target lib/librte_jobstats.a 00:02:19.164 [327/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:19.164 [328/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:19.164 [329/743] Linking static target lib/librte_eventdev.a 00:02:19.164 [330/743] Generating lib/rte_jobstats_def with a custom command 00:02:19.164 [331/743] Linking static target lib/librte_gso.a 00:02:19.164 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:19.164 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:19.164 
[334/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.422 [335/743] Linking target lib/librte_gso.so.23.0 00:02:19.422 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:19.422 [337/743] Generating lib/rte_latencystats_def with a custom command 00:02:19.422 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:19.422 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:19.422 [340/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.422 [341/743] Generating lib/rte_lpm_def with a custom command 00:02:19.422 [342/743] Linking target lib/librte_jobstats.so.23.0 00:02:19.422 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:19.422 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:19.422 [345/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.680 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:19.680 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:02:19.680 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:19.680 [349/743] Linking static target lib/librte_ip_frag.a 00:02:19.680 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:19.938 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.938 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:19.938 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:19.938 [354/743] Linking static target lib/librte_latencystats.a 00:02:20.196 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:20.197 [356/743] Generating lib/rte_member_def with a custom command 00:02:20.197 [357/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:20.197 [358/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:20.197 [359/743] Generating lib/rte_member_mingw with a custom command 00:02:20.197 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:20.197 [361/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:20.197 [362/743] Generating lib/rte_pcapng_def with a custom command 00:02:20.197 [363/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.197 [364/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:20.197 [365/743] Linking target lib/librte_latencystats.so.23.0 00:02:20.474 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.474 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:20.474 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.474 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:20.474 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.739 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:20.739 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:20.739 [373/743] Linking static target lib/librte_lpm.a 00:02:20.739 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 
00:02:20.997 [375/743] Generating lib/rte_power_def with a custom command 00:02:20.997 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:20.997 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.997 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.997 [379/743] Linking target lib/librte_eventdev.so.23.0 00:02:20.997 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:20.997 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:20.997 [382/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.256 [383/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.256 [384/743] Generating lib/rte_regexdev_def with a custom command 00:02:21.256 [385/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:21.256 [386/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:21.256 [387/743] Linking target lib/librte_lpm.so.23.0 00:02:21.256 [388/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:21.256 [389/743] Linking static target lib/librte_pcapng.a 00:02:21.256 [390/743] Generating lib/rte_dmadev_def with a custom command 00:02:21.256 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:21.256 [392/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:21.256 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.256 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:21.256 [395/743] Generating lib/rte_rib_def with a custom command 00:02:21.256 [396/743] Linking static target lib/librte_rawdev.a 00:02:21.256 [397/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:21.256 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:21.256 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:21.514 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:21.514 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.514 [402/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.514 [403/743] Linking static target lib/librte_power.a 00:02:21.514 [404/743] Linking target lib/librte_pcapng.so.23.0 00:02:21.514 [405/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.514 [406/743] Linking static target lib/librte_dmadev.a 00:02:21.772 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:21.772 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.772 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:21.772 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:21.772 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:21.772 [412/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:22.030 [413/743] Linking static target lib/librte_regexdev.a 00:02:22.030 [414/743] Generating lib/rte_sched_def with a custom command 00:02:22.030 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:22.030 [416/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:22.030 [417/743] Generating lib/rte_sched_mingw with a custom command 
00:02:22.030 [418/743] Linking static target lib/librte_member.a 00:02:22.030 [419/743] Generating lib/rte_security_def with a custom command 00:02:22.030 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:22.030 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:22.030 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.031 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:22.289 [424/743] Linking target lib/librte_dmadev.so.23.0 00:02:22.289 [425/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.289 [426/743] Linking static target lib/librte_reorder.a 00:02:22.289 [427/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:22.289 [428/743] Generating lib/rte_stack_def with a custom command 00:02:22.289 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:22.289 [430/743] Linking static target lib/librte_stack.a 00:02:22.289 [431/743] Generating lib/rte_stack_mingw with a custom command 00:02:22.289 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:22.289 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.289 [434/743] Linking target lib/librte_member.so.23.0 00:02:22.547 [435/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.547 [436/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.547 [437/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.547 [438/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.547 [439/743] Linking target lib/librte_reorder.so.23.0 00:02:22.547 [440/743] Linking target lib/librte_stack.so.23.0 00:02:22.547 [441/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:22.547 [442/743] Linking static target lib/librte_rib.a 00:02:22.547 [443/743] Linking target lib/librte_power.so.23.0 00:02:22.547 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.547 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:22.805 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.805 [447/743] Linking static target lib/librte_security.a 00:02:22.805 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.805 [449/743] Linking target lib/librte_rib.so.23.0 00:02:23.064 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:23.064 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:23.064 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:23.064 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:23.322 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.322 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.322 [456/743] Linking target lib/librte_security.so.23.0 00:02:23.322 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:23.322 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:23.583 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:23.583 [460/743] Linking static target lib/librte_sched.a 
00:02:23.845 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.845 [462/743] Linking target lib/librte_sched.so.23.0 00:02:24.103 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:24.103 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:24.103 [465/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:24.103 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:24.103 [467/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:24.103 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:24.103 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:24.361 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:24.361 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:24.619 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:24.619 [473/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:24.619 [474/743] Generating lib/rte_fib_def with a custom command 00:02:24.619 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:24.619 [476/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:24.619 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:24.619 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:24.877 [479/743] Generating lib/rte_fib_mingw with a custom command 00:02:24.877 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:25.134 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:25.134 [482/743] Linking static target lib/librte_ipsec.a 00:02:25.392 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.392 [484/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:25.392 [485/743] Linking target lib/librte_ipsec.so.23.0 00:02:25.650 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:25.650 [487/743] Linking static target lib/librte_fib.a 00:02:25.650 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:25.650 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:25.650 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:25.909 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:25.909 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.909 [493/743] Linking target lib/librte_fib.so.23.0 00:02:26.167 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:26.426 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:26.685 [496/743] Generating lib/rte_port_def with a custom command 00:02:26.685 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:26.685 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:26.685 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:26.685 [500/743] Generating lib/rte_pdump_def with a custom command 00:02:26.685 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:26.685 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:02:26.943 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:26.943 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:26.943 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:27.202 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:27.202 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:27.202 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:27.202 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:27.202 [510/743] Linking static target lib/librte_port.a 00:02:27.460 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:27.718 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:27.718 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.718 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:27.718 [515/743] Linking target lib/librte_port.so.23.0 00:02:27.718 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:27.976 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:27.976 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:27.976 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:27.976 [520/743] Linking static target lib/librte_pdump.a 00:02:28.234 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.234 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:28.491 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:28.491 [524/743] Generating lib/rte_table_def with a custom command 00:02:28.491 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:28.491 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:28.749 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:28.749 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:28.749 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:29.007 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:29.007 [531/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.007 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:29.007 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:29.007 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:29.265 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:29.265 [536/743] Linking static target lib/librte_table.a 00:02:29.265 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:29.524 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:29.782 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.782 [540/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:29.782 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:29.782 [542/743] Linking target lib/librte_table.so.23.0 00:02:30.040 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:30.040 [544/743] Generating lib/rte_graph_def with a custom command 00:02:30.040 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:30.040 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:30.297 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:30.297 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:30.556 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:30.556 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:30.556 [551/743] Linking static target lib/librte_graph.a 00:02:30.556 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:30.813 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:30.813 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:31.075 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:31.383 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:31.383 [557/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:31.383 [558/743] Generating lib/rte_node_def with a custom command 00:02:31.383 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:31.383 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:31.383 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.383 [562/743] Linking target lib/librte_graph.so.23.0 00:02:31.383 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:31.641 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:31.641 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:31.641 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:31.641 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:31.641 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:31.641 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:31.899 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:31.899 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:31.899 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:31.899 [573/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:31.899 [574/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:31.899 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:31.899 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:31.899 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:31.899 [578/743] Linking static target lib/librte_node.a 00:02:31.899 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:31.899 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.157 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.157 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.157 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.157 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.157 [585/743] Linking target lib/librte_node.so.23.0 00:02:32.157 [586/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:02:32.157 [587/743] Linking static target drivers/librte_bus_vdev.a 00:02:32.157 [588/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.157 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.415 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.415 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.415 [592/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.415 [593/743] Linking static target drivers/librte_bus_pci.a 00:02:32.415 [594/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:32.415 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.673 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:32.931 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.931 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:32.931 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:32.931 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:32.931 [601/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:32.931 [602/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:33.190 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:33.190 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:33.190 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:33.447 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:33.447 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.448 [608/743] Linking static target drivers/librte_mempool_ring.a 00:02:33.448 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.448 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:33.705 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:34.272 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:34.272 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:34.272 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:34.838 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:34.838 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:34.838 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:35.404 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:35.404 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:35.662 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:35.662 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:35.662 [622/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:35.662 [623/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:35.662 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:35.920 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:36.854 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:37.112 [627/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:37.112 [628/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:37.112 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:37.112 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:37.371 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:37.371 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:37.371 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:37.371 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:37.630 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:37.630 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:38.195 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:38.195 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:38.195 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:38.452 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:38.452 [641/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:38.452 [642/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:38.452 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:38.452 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:38.452 [645/743] Linking static target drivers/librte_net_i40e.a 00:02:38.710 [646/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:38.710 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:38.968 [648/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:38.968 [649/743] Linking static target lib/librte_vhost.a 00:02:38.968 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:39.226 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:39.226 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.226 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:39.226 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:39.484 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:39.484 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:39.742 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:40.000 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:40.000 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:40.258 [660/743] Generating lib/vhost.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:40.258 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:40.258 [662/743] Linking target lib/librte_vhost.so.23.0 00:02:40.258 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:40.258 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:40.258 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:40.516 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:40.516 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:40.516 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:40.774 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:40.774 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:41.033 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:41.291 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:41.291 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:41.550 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:41.811 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:42.069 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:42.069 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:42.327 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:42.327 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:42.585 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:42.585 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:42.585 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:42.845 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:42.845 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:42.845 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:43.103 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:43.103 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:43.103 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:43.361 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:43.361 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:43.361 [691/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:43.621 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:43.621 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:43.621 [694/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:44.188 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:44.188 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:44.188 [697/743] Compiling C 
object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:44.447 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:44.447 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:45.014 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:45.014 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:45.014 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:45.272 [703/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:45.272 [704/743] Linking static target lib/librte_pipeline.a 00:02:45.272 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:45.272 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:45.531 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:45.789 [708/743] Linking target app/dpdk-dumpcap 00:02:45.789 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:45.789 [710/743] Linking target app/dpdk-pdump 00:02:45.789 [711/743] Linking target app/dpdk-proc-info 00:02:45.789 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:46.048 [713/743] Linking target app/dpdk-test-acl 00:02:46.048 [714/743] Linking target app/dpdk-test-bbdev 00:02:46.307 [715/743] Linking target app/dpdk-test-cmdline 00:02:46.307 [716/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:46.307 [717/743] Linking target app/dpdk-test-compress-perf 00:02:46.307 [718/743] Linking target app/dpdk-test-crypto-perf 00:02:46.565 [719/743] Linking target app/dpdk-test-fib 00:02:46.565 [720/743] Linking target app/dpdk-test-eventdev 00:02:46.565 [721/743] Linking target app/dpdk-test-flow-perf 00:02:46.565 [722/743] Linking target app/dpdk-test-gpudev 00:02:46.565 [723/743] Linking target app/dpdk-test-pipeline 00:02:46.824 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:46.824 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:47.083 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:47.342 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:47.342 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:47.601 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:47.601 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:47.859 [731/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.859 [732/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:47.859 [733/743] Linking target lib/librte_pipeline.so.23.0 00:02:47.859 [734/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:47.859 [735/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:48.118 [736/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:48.118 [737/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:48.377 [738/743] Linking target app/dpdk-test-regex 00:02:48.377 [739/743] Linking target app/dpdk-test-sad 00:02:48.635 [740/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:48.636 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:49.204 [742/743] Linking target app/dpdk-testpmd 00:02:49.204 [743/743] Linking target app/dpdk-test-security-perf 
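With all 743 ninja targets compiled and linked, the job next installs the freshly built DPDK into its install prefix before SPDK itself is configured against it. As a rough sketch of what the autobuild script is driving at this point (the meson setup line and its options are an assumption for illustration; only the build directory build-tmp, the -j10 ninja invocation, and the install prefix /home/vagrant/spdk_repo/dpdk/build visible in the install paths below are taken from this log):

$ cd /home/vagrant/spdk_repo/dpdk
$ meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build   # assumed; the exact options are not shown in this part of the log
$ ninja -C build-tmp -j10                                             # the compile/link phase logged above
$ ninja -C build-tmp -j10 install                                     # the install phase logged below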
00:02:49.204 05:43:10 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:49.204 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:49.204 [0/1] Installing files. 00:02:49.465 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:49.465 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.466 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.467 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:49.467 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.468 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.468 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:49.728 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:49.728 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.729 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:49.729 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:49.729 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.729 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:49.730 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:49.730 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:49.730 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:49.730 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:49.730 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.730 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.730 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.730 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.730 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.730 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.730 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.730 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.730 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.991 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.992 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.993 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:49.994 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:49.994 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:49.994 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:49.994 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:49.994 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:49.994 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:49.994 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:49.994 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:49.994 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:49.994 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:49.994 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:49.994 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:49.994 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:49.994 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:49.994 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:49.994 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:49.994 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:49.994 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:49.994 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:49.994 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:49.994 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:49.994 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:49.994 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:49.994 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:49.994 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:49.994 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:49.994 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:49.994 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:49.994 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:49.994 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:49.994 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:49.994 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:49.994 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:49.994 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:49.994 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:49.994 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:49.994 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:49.994 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:49.994 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:49.994 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:49.994 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:49.994 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:49.994 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:49.994 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:49.994 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:49.994 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:49.994 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:49.994 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:49.994 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:49.994 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:49.994 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:49.994 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:49.994 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:49.994 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:49.994 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:49.994 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:49.994 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:49.995 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:49.995 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:49.995 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:49.995 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:49.995 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:49.995 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:49.995 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:49.995 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:49.995 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:49.995 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:49.995 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:49.995 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:49.995 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:49.995 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:49.995 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:49.995 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:49.995 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:49.995 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:49.995 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:49.995 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:49.995 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:49.995 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:49.995 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
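The './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' lines above show the PMD driver libraries being relocated into the versioned plugin directory dpdk/pmds-23.0, while every regular DPDK library gets the usual three-link SONAME chain (librte_X.so -> librte_X.so.23 -> librte_X.so.23.0). A minimal bash sketch for sanity-checking that layout before pointing SPDK at it; the paths are the install prefix used in this log, the check itself is illustrative rather than part of the test:

    # Hedged check of the DPDK install layout produced above.
    prefix=/home/vagrant/spdk_repo/dpdk/build
    readlink -f "$prefix/lib/librte_eal.so"     # should resolve to librte_eal.so.23.0
    ls "$prefix/lib/dpdk/pmds-23.0/"            # relocated bus/mempool/net PMD plugins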
00:02:49.995 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:49.995 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:49.995 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:49.995 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:49.995 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:49.995 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:49.995 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:49.995 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:49.995 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:49.995 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:49.995 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:49.995 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:49.995 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:49.995 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:49.995 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:49.995 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:49.995 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:49.995 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:49.995 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:49.995 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:49.995 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:49.995 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:49.995 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:49.995 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:49.995 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:49.995 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:49.995 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:49.995 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:49.995 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:49.995 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:49.995 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:49.995 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:49.995 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:49.995 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:49.995 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:49.995 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:49.995 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:49.995 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:49.995 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:49.995 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:49.995 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:49.995 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:49.995 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:49.995 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:49.995 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:49.995 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:50.254 05:43:11 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:50.254 05:43:11 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:50.254 05:43:11 -- common/autobuild_common.sh@203 -- $ cat 00:02:50.254 05:43:11 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:50.254 00:02:50.254 real 0m51.978s 00:02:50.254 user 6m11.768s 00:02:50.254 sys 0m54.973s 00:02:50.254 05:43:11 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:50.254 ************************************ 00:02:50.254 END TEST build_native_dpdk 00:02:50.254 05:43:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.254 ************************************ 00:02:50.254 05:43:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:50.254 05:43:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:50.254 05:43:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:50.254 05:43:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:50.254 05:43:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:50.254 05:43:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:50.254 05:43:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:50.254 
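The build_native_dpdk test that finishes here is, per the artifacts in the log (the build-tmp builddir, the meson-private pkg-config files, and the custom symlink-drivers-solibs.sh install script), a meson/ninja build installed into a local prefix. A rough, hedged reconstruction under those assumptions; the exact meson options used by the autobuild script are not shown in this log and the ones below are illustrative:

    # Approximate sequence behind the install output above (options assumed).
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp --prefix="$PWD/build" --libdir=lib
    ninja -C build-tmp
    ninja -C build-tmp install   # also runs buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0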
05:43:11 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:50.254 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:50.513 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.513 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:50.513 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:50.772 Using 'verbs' RDMA provider 00:03:03.914 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:18.794 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:18.794 Creating mk/config.mk...done. 00:03:18.794 Creating mk/cc.flags.mk...done. 00:03:18.794 Type 'make' to build. 00:03:18.794 05:43:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:18.794 05:43:38 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:18.794 05:43:38 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:18.794 05:43:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:18.794 ************************************ 00:03:18.794 START TEST make 00:03:18.794 ************************************ 00:03:18.794 05:43:38 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:18.794 make[1]: Nothing to be done for 'all'. 00:03:40.718 CC lib/ut/ut.o 00:03:40.718 CC lib/ut_mock/mock.o 00:03:40.718 CC lib/log/log.o 00:03:40.718 CC lib/log/log_deprecated.o 00:03:40.718 CC lib/log/log_flags.o 00:03:40.718 LIB libspdk_ut_mock.a 00:03:40.718 LIB libspdk_ut.a 00:03:40.718 SO libspdk_ut_mock.so.5.0 00:03:40.718 SO libspdk_ut.so.1.0 00:03:40.718 LIB libspdk_log.a 00:03:40.718 SYMLINK libspdk_ut_mock.so 00:03:40.718 SO libspdk_log.so.6.1 00:03:40.718 SYMLINK libspdk_ut.so 00:03:40.718 SYMLINK libspdk_log.so 00:03:40.718 CC lib/util/base64.o 00:03:40.718 CC lib/util/bit_array.o 00:03:40.718 CC lib/util/cpuset.o 00:03:40.718 CC lib/util/crc32.o 00:03:40.718 CC lib/util/crc16.o 00:03:40.718 CC lib/util/crc32c.o 00:03:40.718 CXX lib/trace_parser/trace.o 00:03:40.718 CC lib/dma/dma.o 00:03:40.718 CC lib/ioat/ioat.o 00:03:40.718 CC lib/vfio_user/host/vfio_user_pci.o 00:03:40.718 CC lib/vfio_user/host/vfio_user.o 00:03:40.718 CC lib/util/crc32_ieee.o 00:03:40.718 CC lib/util/crc64.o 00:03:40.718 CC lib/util/dif.o 00:03:40.718 CC lib/util/fd.o 00:03:40.718 CC lib/util/file.o 00:03:40.718 CC lib/util/hexlify.o 00:03:40.718 LIB libspdk_dma.a 00:03:40.718 CC lib/util/iov.o 00:03:40.718 LIB libspdk_ioat.a 00:03:40.718 SO libspdk_dma.so.3.0 00:03:40.718 SO libspdk_ioat.so.6.0 00:03:40.718 LIB libspdk_vfio_user.a 00:03:40.718 CC lib/util/math.o 00:03:40.718 SYMLINK libspdk_dma.so 00:03:40.718 CC lib/util/pipe.o 00:03:40.718 CC lib/util/strerror_tls.o 00:03:40.718 SO libspdk_vfio_user.so.4.0 00:03:40.718 SYMLINK libspdk_ioat.so 00:03:40.718 CC lib/util/string.o 00:03:40.718 CC lib/util/uuid.o 00:03:40.718 SYMLINK libspdk_vfio_user.so 00:03:40.718 CC lib/util/fd_group.o 00:03:40.718 CC lib/util/xor.o 00:03:40.718 CC lib/util/zipf.o 00:03:40.718 LIB libspdk_util.a 00:03:40.718 SO libspdk_util.so.8.0 00:03:40.718 SYMLINK libspdk_util.so 00:03:40.718 LIB libspdk_trace_parser.a 00:03:40.719 SO libspdk_trace_parser.so.4.0 00:03:40.977 CC lib/rdma/common.o 00:03:40.977 CC lib/rdma/rdma_verbs.o 
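The configure invocation near the start of this stretch is the step that ties SPDK to the DPDK prefix installed above via --with-dpdk, after which the make output that follows is produced. A condensed sketch of that configure/make pair, keeping only a subset of the flags shown in the log line itself (the -j"$(nproc)" substitution is mine; this job runs make -j10):

    # Condensed sketch of the configure/build driving the output in this section.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --with-shared --with-uring --enable-coverage --enable-debug
    make -j"$(nproc)"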
00:03:40.977 CC lib/json/json_parse.o 00:03:40.977 CC lib/env_dpdk/env.o 00:03:40.977 CC lib/json/json_util.o 00:03:40.977 CC lib/json/json_write.o 00:03:40.977 CC lib/conf/conf.o 00:03:40.977 CC lib/vmd/vmd.o 00:03:40.977 CC lib/idxd/idxd.o 00:03:40.977 SYMLINK libspdk_trace_parser.so 00:03:40.977 CC lib/idxd/idxd_user.o 00:03:41.235 CC lib/idxd/idxd_kernel.o 00:03:41.235 CC lib/env_dpdk/memory.o 00:03:41.235 CC lib/env_dpdk/pci.o 00:03:41.235 CC lib/vmd/led.o 00:03:41.235 LIB libspdk_conf.a 00:03:41.235 LIB libspdk_rdma.a 00:03:41.235 LIB libspdk_json.a 00:03:41.235 SO libspdk_conf.so.5.0 00:03:41.235 SO libspdk_rdma.so.5.0 00:03:41.235 SO libspdk_json.so.5.1 00:03:41.235 SYMLINK libspdk_conf.so 00:03:41.235 SYMLINK libspdk_rdma.so 00:03:41.235 CC lib/env_dpdk/init.o 00:03:41.235 CC lib/env_dpdk/threads.o 00:03:41.235 CC lib/env_dpdk/pci_ioat.o 00:03:41.235 SYMLINK libspdk_json.so 00:03:41.235 CC lib/env_dpdk/pci_virtio.o 00:03:41.235 CC lib/env_dpdk/pci_vmd.o 00:03:41.493 CC lib/env_dpdk/pci_idxd.o 00:03:41.493 CC lib/env_dpdk/pci_event.o 00:03:41.493 CC lib/env_dpdk/sigbus_handler.o 00:03:41.493 CC lib/env_dpdk/pci_dpdk.o 00:03:41.493 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:41.493 LIB libspdk_idxd.a 00:03:41.493 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:41.493 SO libspdk_idxd.so.11.0 00:03:41.493 LIB libspdk_vmd.a 00:03:41.752 SO libspdk_vmd.so.5.0 00:03:41.752 SYMLINK libspdk_idxd.so 00:03:41.752 SYMLINK libspdk_vmd.so 00:03:41.752 CC lib/jsonrpc/jsonrpc_server.o 00:03:41.752 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:41.752 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:41.752 CC lib/jsonrpc/jsonrpc_client.o 00:03:42.010 LIB libspdk_jsonrpc.a 00:03:42.010 SO libspdk_jsonrpc.so.5.1 00:03:42.010 SYMLINK libspdk_jsonrpc.so 00:03:42.269 CC lib/rpc/rpc.o 00:03:42.269 LIB libspdk_env_dpdk.a 00:03:42.527 LIB libspdk_rpc.a 00:03:42.527 SO libspdk_env_dpdk.so.13.0 00:03:42.527 SO libspdk_rpc.so.5.0 00:03:42.527 SYMLINK libspdk_rpc.so 00:03:42.527 SYMLINK libspdk_env_dpdk.so 00:03:42.786 CC lib/notify/notify.o 00:03:42.786 CC lib/notify/notify_rpc.o 00:03:42.786 CC lib/trace/trace.o 00:03:42.786 CC lib/trace/trace_rpc.o 00:03:42.786 CC lib/trace/trace_flags.o 00:03:42.786 CC lib/sock/sock_rpc.o 00:03:42.786 CC lib/sock/sock.o 00:03:42.786 LIB libspdk_notify.a 00:03:42.786 SO libspdk_notify.so.5.0 00:03:43.044 SYMLINK libspdk_notify.so 00:03:43.044 LIB libspdk_trace.a 00:03:43.044 SO libspdk_trace.so.9.0 00:03:43.044 LIB libspdk_sock.a 00:03:43.044 SYMLINK libspdk_trace.so 00:03:43.044 SO libspdk_sock.so.8.0 00:03:43.303 SYMLINK libspdk_sock.so 00:03:43.303 CC lib/thread/iobuf.o 00:03:43.303 CC lib/thread/thread.o 00:03:43.303 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:43.303 CC lib/nvme/nvme_ctrlr.o 00:03:43.303 CC lib/nvme/nvme_fabric.o 00:03:43.303 CC lib/nvme/nvme_ns_cmd.o 00:03:43.303 CC lib/nvme/nvme_qpair.o 00:03:43.303 CC lib/nvme/nvme_ns.o 00:03:43.303 CC lib/nvme/nvme_pcie_common.o 00:03:43.303 CC lib/nvme/nvme_pcie.o 00:03:43.562 CC lib/nvme/nvme.o 00:03:44.127 CC lib/nvme/nvme_quirks.o 00:03:44.127 CC lib/nvme/nvme_transport.o 00:03:44.127 CC lib/nvme/nvme_discovery.o 00:03:44.127 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:44.127 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:44.385 CC lib/nvme/nvme_tcp.o 00:03:44.385 CC lib/nvme/nvme_opal.o 00:03:44.643 CC lib/nvme/nvme_io_msg.o 00:03:44.901 CC lib/nvme/nvme_poll_group.o 00:03:44.901 CC lib/nvme/nvme_zns.o 00:03:44.901 LIB libspdk_thread.a 00:03:44.901 SO libspdk_thread.so.9.0 00:03:44.901 CC lib/nvme/nvme_cuse.o 00:03:44.901 CC lib/nvme/nvme_vfio_user.o 
00:03:44.901 SYMLINK libspdk_thread.so 00:03:44.901 CC lib/nvme/nvme_rdma.o 00:03:45.160 CC lib/accel/accel.o 00:03:45.160 CC lib/blob/blobstore.o 00:03:45.160 CC lib/init/json_config.o 00:03:45.418 CC lib/init/subsystem.o 00:03:45.418 CC lib/init/subsystem_rpc.o 00:03:45.418 CC lib/blob/request.o 00:03:45.675 CC lib/init/rpc.o 00:03:45.675 CC lib/virtio/virtio.o 00:03:45.675 CC lib/virtio/virtio_vhost_user.o 00:03:45.675 CC lib/virtio/virtio_vfio_user.o 00:03:45.675 LIB libspdk_init.a 00:03:45.675 CC lib/virtio/virtio_pci.o 00:03:45.675 SO libspdk_init.so.4.0 00:03:45.933 CC lib/accel/accel_rpc.o 00:03:45.933 SYMLINK libspdk_init.so 00:03:45.933 CC lib/blob/zeroes.o 00:03:45.933 CC lib/blob/blob_bs_dev.o 00:03:45.933 CC lib/accel/accel_sw.o 00:03:45.933 CC lib/event/reactor.o 00:03:45.933 CC lib/event/app.o 00:03:45.933 CC lib/event/log_rpc.o 00:03:45.933 CC lib/event/app_rpc.o 00:03:46.191 LIB libspdk_virtio.a 00:03:46.191 SO libspdk_virtio.so.6.0 00:03:46.191 CC lib/event/scheduler_static.o 00:03:46.191 SYMLINK libspdk_virtio.so 00:03:46.191 LIB libspdk_accel.a 00:03:46.191 SO libspdk_accel.so.14.0 00:03:46.450 LIB libspdk_nvme.a 00:03:46.450 SYMLINK libspdk_accel.so 00:03:46.450 LIB libspdk_event.a 00:03:46.450 SO libspdk_event.so.12.0 00:03:46.450 CC lib/bdev/bdev.o 00:03:46.450 CC lib/bdev/bdev_rpc.o 00:03:46.450 CC lib/bdev/bdev_zone.o 00:03:46.450 CC lib/bdev/part.o 00:03:46.450 SO libspdk_nvme.so.12.0 00:03:46.450 CC lib/bdev/scsi_nvme.o 00:03:46.450 SYMLINK libspdk_event.so 00:03:46.708 SYMLINK libspdk_nvme.so 00:03:48.086 LIB libspdk_blob.a 00:03:48.086 SO libspdk_blob.so.10.1 00:03:48.086 SYMLINK libspdk_blob.so 00:03:48.086 CC lib/lvol/lvol.o 00:03:48.086 CC lib/blobfs/blobfs.o 00:03:48.086 CC lib/blobfs/tree.o 00:03:49.021 LIB libspdk_blobfs.a 00:03:49.021 LIB libspdk_bdev.a 00:03:49.021 SO libspdk_blobfs.so.9.0 00:03:49.021 LIB libspdk_lvol.a 00:03:49.332 SO libspdk_lvol.so.9.1 00:03:49.332 SO libspdk_bdev.so.14.0 00:03:49.332 SYMLINK libspdk_blobfs.so 00:03:49.332 SYMLINK libspdk_lvol.so 00:03:49.332 SYMLINK libspdk_bdev.so 00:03:49.332 CC lib/ublk/ublk.o 00:03:49.332 CC lib/ublk/ublk_rpc.o 00:03:49.332 CC lib/nbd/nbd.o 00:03:49.332 CC lib/scsi/dev.o 00:03:49.332 CC lib/scsi/lun.o 00:03:49.332 CC lib/nbd/nbd_rpc.o 00:03:49.332 CC lib/scsi/port.o 00:03:49.332 CC lib/scsi/scsi.o 00:03:49.332 CC lib/nvmf/ctrlr.o 00:03:49.332 CC lib/ftl/ftl_core.o 00:03:49.590 CC lib/nvmf/ctrlr_discovery.o 00:03:49.590 CC lib/nvmf/ctrlr_bdev.o 00:03:49.590 CC lib/nvmf/subsystem.o 00:03:49.590 CC lib/nvmf/nvmf.o 00:03:49.590 CC lib/scsi/scsi_bdev.o 00:03:49.848 CC lib/scsi/scsi_pr.o 00:03:49.848 CC lib/ftl/ftl_init.o 00:03:49.848 LIB libspdk_nbd.a 00:03:49.848 SO libspdk_nbd.so.6.0 00:03:49.848 SYMLINK libspdk_nbd.so 00:03:49.848 CC lib/ftl/ftl_layout.o 00:03:50.106 CC lib/nvmf/nvmf_rpc.o 00:03:50.106 LIB libspdk_ublk.a 00:03:50.106 CC lib/scsi/scsi_rpc.o 00:03:50.106 SO libspdk_ublk.so.2.0 00:03:50.106 SYMLINK libspdk_ublk.so 00:03:50.106 CC lib/scsi/task.o 00:03:50.106 CC lib/nvmf/transport.o 00:03:50.106 CC lib/ftl/ftl_debug.o 00:03:50.106 CC lib/nvmf/tcp.o 00:03:50.365 CC lib/ftl/ftl_io.o 00:03:50.365 CC lib/nvmf/rdma.o 00:03:50.365 LIB libspdk_scsi.a 00:03:50.365 CC lib/ftl/ftl_sb.o 00:03:50.365 SO libspdk_scsi.so.8.0 00:03:50.623 CC lib/ftl/ftl_l2p.o 00:03:50.623 SYMLINK libspdk_scsi.so 00:03:50.623 CC lib/ftl/ftl_l2p_flat.o 00:03:50.623 CC lib/iscsi/conn.o 00:03:50.623 CC lib/vhost/vhost.o 00:03:50.881 CC lib/vhost/vhost_rpc.o 00:03:50.881 CC lib/vhost/vhost_scsi.o 00:03:50.881 CC 
lib/vhost/vhost_blk.o 00:03:50.881 CC lib/vhost/rte_vhost_user.o 00:03:50.881 CC lib/ftl/ftl_nv_cache.o 00:03:51.140 CC lib/ftl/ftl_band.o 00:03:51.398 CC lib/iscsi/init_grp.o 00:03:51.398 CC lib/ftl/ftl_band_ops.o 00:03:51.398 CC lib/iscsi/iscsi.o 00:03:51.656 CC lib/ftl/ftl_writer.o 00:03:51.656 CC lib/ftl/ftl_rq.o 00:03:51.914 CC lib/ftl/ftl_reloc.o 00:03:51.914 CC lib/iscsi/md5.o 00:03:51.914 CC lib/iscsi/param.o 00:03:51.914 CC lib/iscsi/portal_grp.o 00:03:51.914 CC lib/iscsi/tgt_node.o 00:03:51.914 CC lib/ftl/ftl_l2p_cache.o 00:03:51.914 CC lib/ftl/ftl_p2l.o 00:03:51.914 LIB libspdk_vhost.a 00:03:51.914 CC lib/iscsi/iscsi_subsystem.o 00:03:52.172 SO libspdk_vhost.so.7.1 00:03:52.172 CC lib/iscsi/iscsi_rpc.o 00:03:52.172 CC lib/iscsi/task.o 00:03:52.172 SYMLINK libspdk_vhost.so 00:03:52.172 CC lib/ftl/mngt/ftl_mngt.o 00:03:52.172 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:52.172 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:52.172 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:52.430 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:52.430 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:52.430 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:52.430 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:52.430 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:52.430 LIB libspdk_nvmf.a 00:03:52.430 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:52.430 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:52.430 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:52.688 SO libspdk_nvmf.so.17.0 00:03:52.688 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:52.688 CC lib/ftl/utils/ftl_conf.o 00:03:52.688 CC lib/ftl/utils/ftl_md.o 00:03:52.688 CC lib/ftl/utils/ftl_mempool.o 00:03:52.688 CC lib/ftl/utils/ftl_bitmap.o 00:03:52.688 CC lib/ftl/utils/ftl_property.o 00:03:52.688 SYMLINK libspdk_nvmf.so 00:03:52.688 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:52.688 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:52.946 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:52.946 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:52.946 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:52.946 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:52.946 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:52.946 LIB libspdk_iscsi.a 00:03:52.946 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:52.946 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:52.946 SO libspdk_iscsi.so.7.0 00:03:53.204 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.204 CC lib/ftl/base/ftl_base_dev.o 00:03:53.204 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.204 CC lib/ftl/ftl_trace.o 00:03:53.204 SYMLINK libspdk_iscsi.so 00:03:53.463 LIB libspdk_ftl.a 00:03:53.720 SO libspdk_ftl.so.8.0 00:03:53.978 SYMLINK libspdk_ftl.so 00:03:54.236 CC module/env_dpdk/env_dpdk_rpc.o 00:03:54.236 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.236 CC module/sock/posix/posix.o 00:03:54.236 CC module/blob/bdev/blob_bdev.o 00:03:54.236 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.236 CC module/sock/uring/uring.o 00:03:54.236 CC module/accel/ioat/accel_ioat.o 00:03:54.236 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.236 CC module/accel/dsa/accel_dsa.o 00:03:54.236 CC module/accel/error/accel_error.o 00:03:54.236 LIB libspdk_env_dpdk_rpc.a 00:03:54.236 SO libspdk_env_dpdk_rpc.so.5.0 00:03:54.494 LIB libspdk_scheduler_dpdk_governor.a 00:03:54.494 LIB libspdk_scheduler_gscheduler.a 00:03:54.494 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.494 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.494 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:54.494 SO libspdk_scheduler_gscheduler.so.3.0 00:03:54.494 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.494 LIB libspdk_scheduler_dynamic.a 00:03:54.494 CC 
module/accel/error/accel_error_rpc.o 00:03:54.494 SO libspdk_scheduler_dynamic.so.3.0 00:03:54.494 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:54.494 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.494 LIB libspdk_blob_bdev.a 00:03:54.495 SYMLINK libspdk_scheduler_dynamic.so 00:03:54.495 SO libspdk_blob_bdev.so.10.1 00:03:54.495 LIB libspdk_accel_dsa.a 00:03:54.495 LIB libspdk_accel_ioat.a 00:03:54.495 SO libspdk_accel_dsa.so.4.0 00:03:54.495 SYMLINK libspdk_blob_bdev.so 00:03:54.495 LIB libspdk_accel_error.a 00:03:54.495 CC module/accel/iaa/accel_iaa.o 00:03:54.495 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.753 SO libspdk_accel_ioat.so.5.0 00:03:54.753 SO libspdk_accel_error.so.1.0 00:03:54.753 SYMLINK libspdk_accel_dsa.so 00:03:54.753 SYMLINK libspdk_accel_ioat.so 00:03:54.753 SYMLINK libspdk_accel_error.so 00:03:54.753 CC module/bdev/error/vbdev_error.o 00:03:54.753 CC module/bdev/delay/vbdev_delay.o 00:03:54.753 CC module/blobfs/bdev/blobfs_bdev.o 00:03:54.753 CC module/bdev/gpt/gpt.o 00:03:54.753 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.753 LIB libspdk_accel_iaa.a 00:03:54.753 CC module/bdev/malloc/bdev_malloc.o 00:03:55.011 SO libspdk_accel_iaa.so.2.0 00:03:55.011 CC module/bdev/null/bdev_null.o 00:03:55.011 SYMLINK libspdk_accel_iaa.so 00:03:55.011 LIB libspdk_sock_uring.a 00:03:55.011 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:55.011 LIB libspdk_sock_posix.a 00:03:55.011 SO libspdk_sock_uring.so.4.0 00:03:55.011 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:55.011 SO libspdk_sock_posix.so.5.0 00:03:55.011 CC module/bdev/gpt/vbdev_gpt.o 00:03:55.011 SYMLINK libspdk_sock_uring.so 00:03:55.011 CC module/bdev/error/vbdev_error_rpc.o 00:03:55.011 SYMLINK libspdk_sock_posix.so 00:03:55.269 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:55.269 CC module/bdev/nvme/bdev_nvme.o 00:03:55.269 CC module/bdev/null/bdev_null_rpc.o 00:03:55.269 LIB libspdk_blobfs_bdev.a 00:03:55.269 LIB libspdk_bdev_malloc.a 00:03:55.269 SO libspdk_blobfs_bdev.so.5.0 00:03:55.269 CC module/bdev/passthru/vbdev_passthru.o 00:03:55.269 LIB libspdk_bdev_error.a 00:03:55.269 CC module/bdev/raid/bdev_raid.o 00:03:55.269 SO libspdk_bdev_malloc.so.5.0 00:03:55.269 SO libspdk_bdev_error.so.5.0 00:03:55.269 SYMLINK libspdk_blobfs_bdev.so 00:03:55.269 LIB libspdk_bdev_gpt.a 00:03:55.269 SYMLINK libspdk_bdev_malloc.so 00:03:55.269 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:55.269 SYMLINK libspdk_bdev_error.so 00:03:55.269 LIB libspdk_bdev_delay.a 00:03:55.269 SO libspdk_bdev_gpt.so.5.0 00:03:55.527 LIB libspdk_bdev_null.a 00:03:55.527 SO libspdk_bdev_delay.so.5.0 00:03:55.527 SO libspdk_bdev_null.so.5.0 00:03:55.527 CC module/bdev/split/vbdev_split.o 00:03:55.527 SYMLINK libspdk_bdev_gpt.so 00:03:55.527 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.527 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:55.527 SYMLINK libspdk_bdev_delay.so 00:03:55.527 CC module/bdev/uring/bdev_uring.o 00:03:55.527 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:55.527 SYMLINK libspdk_bdev_null.so 00:03:55.527 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.527 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:55.785 LIB libspdk_bdev_lvol.a 00:03:55.785 CC module/bdev/split/vbdev_split_rpc.o 00:03:55.785 SO libspdk_bdev_lvol.so.5.0 00:03:55.785 LIB libspdk_bdev_passthru.a 00:03:55.785 SO libspdk_bdev_passthru.so.5.0 00:03:55.785 CC module/bdev/aio/bdev_aio.o 00:03:55.785 SYMLINK libspdk_bdev_lvol.so 00:03:55.785 CC module/bdev/ftl/bdev_ftl.o 00:03:55.785 LIB libspdk_bdev_zone_block.a 00:03:55.785 SYMLINK libspdk_bdev_passthru.so 
00:03:55.785 CC module/bdev/aio/bdev_aio_rpc.o 00:03:55.785 SO libspdk_bdev_zone_block.so.5.0 00:03:55.785 CC module/bdev/uring/bdev_uring_rpc.o 00:03:55.785 LIB libspdk_bdev_split.a 00:03:56.044 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.044 SO libspdk_bdev_split.so.5.0 00:03:56.044 SYMLINK libspdk_bdev_zone_block.so 00:03:56.044 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.044 CC module/bdev/nvme/nvme_rpc.o 00:03:56.044 SYMLINK libspdk_bdev_split.so 00:03:56.044 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.044 LIB libspdk_bdev_uring.a 00:03:56.044 CC module/bdev/raid/raid0.o 00:03:56.044 SO libspdk_bdev_uring.so.5.0 00:03:56.302 CC module/bdev/nvme/bdev_mdns_client.o 00:03:56.302 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:56.302 SYMLINK libspdk_bdev_uring.so 00:03:56.302 CC module/bdev/nvme/vbdev_opal.o 00:03:56.302 LIB libspdk_bdev_aio.a 00:03:56.302 SO libspdk_bdev_aio.so.5.0 00:03:56.302 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:56.302 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.302 CC module/bdev/raid/raid1.o 00:03:56.302 LIB libspdk_bdev_iscsi.a 00:03:56.302 SYMLINK libspdk_bdev_aio.so 00:03:56.302 CC module/bdev/raid/concat.o 00:03:56.302 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:56.302 SO libspdk_bdev_iscsi.so.5.0 00:03:56.302 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.302 LIB libspdk_bdev_ftl.a 00:03:56.560 SYMLINK libspdk_bdev_iscsi.so 00:03:56.560 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.560 SO libspdk_bdev_ftl.so.5.0 00:03:56.560 SYMLINK libspdk_bdev_ftl.so 00:03:56.560 LIB libspdk_bdev_raid.a 00:03:56.560 SO libspdk_bdev_raid.so.5.0 00:03:56.818 SYMLINK libspdk_bdev_raid.so 00:03:56.818 LIB libspdk_bdev_virtio.a 00:03:56.818 SO libspdk_bdev_virtio.so.5.0 00:03:57.077 SYMLINK libspdk_bdev_virtio.so 00:03:57.643 LIB libspdk_bdev_nvme.a 00:03:57.643 SO libspdk_bdev_nvme.so.6.0 00:03:57.643 SYMLINK libspdk_bdev_nvme.so 00:03:58.211 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:58.211 CC module/event/subsystems/vmd/vmd.o 00:03:58.211 CC module/event/subsystems/sock/sock.o 00:03:58.211 CC module/event/subsystems/scheduler/scheduler.o 00:03:58.211 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:58.211 CC module/event/subsystems/iobuf/iobuf.o 00:03:58.211 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:58.211 LIB libspdk_event_sock.a 00:03:58.211 LIB libspdk_event_iobuf.a 00:03:58.211 LIB libspdk_event_vhost_blk.a 00:03:58.211 LIB libspdk_event_vmd.a 00:03:58.211 LIB libspdk_event_scheduler.a 00:03:58.211 SO libspdk_event_sock.so.4.0 00:03:58.211 SO libspdk_event_iobuf.so.2.0 00:03:58.211 SO libspdk_event_vhost_blk.so.2.0 00:03:58.211 SO libspdk_event_vmd.so.5.0 00:03:58.211 SO libspdk_event_scheduler.so.3.0 00:03:58.211 SYMLINK libspdk_event_iobuf.so 00:03:58.211 SYMLINK libspdk_event_sock.so 00:03:58.211 SYMLINK libspdk_event_scheduler.so 00:03:58.211 SYMLINK libspdk_event_vhost_blk.so 00:03:58.469 SYMLINK libspdk_event_vmd.so 00:03:58.469 CC module/event/subsystems/accel/accel.o 00:03:58.729 LIB libspdk_event_accel.a 00:03:58.729 SO libspdk_event_accel.so.5.0 00:03:58.729 SYMLINK libspdk_event_accel.so 00:03:58.987 CC module/event/subsystems/bdev/bdev.o 00:03:59.246 LIB libspdk_event_bdev.a 00:03:59.246 SO libspdk_event_bdev.so.5.0 00:03:59.246 SYMLINK libspdk_event_bdev.so 00:03:59.504 CC module/event/subsystems/ublk/ublk.o 00:03:59.504 CC module/event/subsystems/scsi/scsi.o 00:03:59.504 CC module/event/subsystems/nbd/nbd.o 00:03:59.504 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:59.504 CC module/event/subsystems/nvmf/nvmf_tgt.o 
00:03:59.504 LIB libspdk_event_ublk.a 00:03:59.504 LIB libspdk_event_nbd.a 00:03:59.504 LIB libspdk_event_scsi.a 00:03:59.832 SO libspdk_event_ublk.so.2.0 00:03:59.832 SO libspdk_event_nbd.so.5.0 00:03:59.832 SO libspdk_event_scsi.so.5.0 00:03:59.832 SYMLINK libspdk_event_ublk.so 00:03:59.832 SYMLINK libspdk_event_nbd.so 00:03:59.832 LIB libspdk_event_nvmf.a 00:03:59.832 SYMLINK libspdk_event_scsi.so 00:03:59.832 SO libspdk_event_nvmf.so.5.0 00:03:59.832 SYMLINK libspdk_event_nvmf.so 00:03:59.832 CC module/event/subsystems/iscsi/iscsi.o 00:03:59.832 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:00.091 LIB libspdk_event_vhost_scsi.a 00:04:00.091 SO libspdk_event_vhost_scsi.so.2.0 00:04:00.091 LIB libspdk_event_iscsi.a 00:04:00.091 SO libspdk_event_iscsi.so.5.0 00:04:00.091 SYMLINK libspdk_event_vhost_scsi.so 00:04:00.091 SYMLINK libspdk_event_iscsi.so 00:04:00.349 SO libspdk.so.5.0 00:04:00.350 SYMLINK libspdk.so 00:04:00.350 CXX app/trace/trace.o 00:04:00.350 CC app/trace_record/trace_record.o 00:04:00.608 TEST_HEADER include/spdk/accel.h 00:04:00.608 TEST_HEADER include/spdk/accel_module.h 00:04:00.608 TEST_HEADER include/spdk/assert.h 00:04:00.608 TEST_HEADER include/spdk/barrier.h 00:04:00.608 TEST_HEADER include/spdk/base64.h 00:04:00.608 TEST_HEADER include/spdk/bdev.h 00:04:00.608 TEST_HEADER include/spdk/bdev_module.h 00:04:00.608 TEST_HEADER include/spdk/bdev_zone.h 00:04:00.608 TEST_HEADER include/spdk/bit_array.h 00:04:00.608 TEST_HEADER include/spdk/bit_pool.h 00:04:00.608 TEST_HEADER include/spdk/blob_bdev.h 00:04:00.608 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:00.608 TEST_HEADER include/spdk/blobfs.h 00:04:00.608 CC app/nvmf_tgt/nvmf_main.o 00:04:00.608 TEST_HEADER include/spdk/blob.h 00:04:00.608 TEST_HEADER include/spdk/conf.h 00:04:00.608 TEST_HEADER include/spdk/config.h 00:04:00.608 TEST_HEADER include/spdk/cpuset.h 00:04:00.608 TEST_HEADER include/spdk/crc16.h 00:04:00.608 TEST_HEADER include/spdk/crc32.h 00:04:00.608 TEST_HEADER include/spdk/crc64.h 00:04:00.608 TEST_HEADER include/spdk/dif.h 00:04:00.608 TEST_HEADER include/spdk/dma.h 00:04:00.608 TEST_HEADER include/spdk/endian.h 00:04:00.608 TEST_HEADER include/spdk/env_dpdk.h 00:04:00.608 TEST_HEADER include/spdk/env.h 00:04:00.608 TEST_HEADER include/spdk/event.h 00:04:00.608 TEST_HEADER include/spdk/fd_group.h 00:04:00.608 TEST_HEADER include/spdk/fd.h 00:04:00.608 TEST_HEADER include/spdk/file.h 00:04:00.608 TEST_HEADER include/spdk/ftl.h 00:04:00.608 TEST_HEADER include/spdk/gpt_spec.h 00:04:00.608 TEST_HEADER include/spdk/hexlify.h 00:04:00.608 CC examples/accel/perf/accel_perf.o 00:04:00.608 TEST_HEADER include/spdk/histogram_data.h 00:04:00.608 CC test/bdev/bdevio/bdevio.o 00:04:00.608 TEST_HEADER include/spdk/idxd.h 00:04:00.608 CC test/blobfs/mkfs/mkfs.o 00:04:00.608 TEST_HEADER include/spdk/idxd_spec.h 00:04:00.608 TEST_HEADER include/spdk/init.h 00:04:00.608 TEST_HEADER include/spdk/ioat.h 00:04:00.608 TEST_HEADER include/spdk/ioat_spec.h 00:04:00.608 TEST_HEADER include/spdk/iscsi_spec.h 00:04:00.608 TEST_HEADER include/spdk/json.h 00:04:00.609 TEST_HEADER include/spdk/jsonrpc.h 00:04:00.609 TEST_HEADER include/spdk/likely.h 00:04:00.609 TEST_HEADER include/spdk/log.h 00:04:00.609 TEST_HEADER include/spdk/lvol.h 00:04:00.609 TEST_HEADER include/spdk/memory.h 00:04:00.609 TEST_HEADER include/spdk/mmio.h 00:04:00.609 TEST_HEADER include/spdk/nbd.h 00:04:00.609 CC test/accel/dif/dif.o 00:04:00.609 TEST_HEADER include/spdk/notify.h 00:04:00.609 TEST_HEADER include/spdk/nvme.h 00:04:00.609 
TEST_HEADER include/spdk/nvme_intel.h 00:04:00.609 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:00.609 CC test/app/bdev_svc/bdev_svc.o 00:04:00.609 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:00.609 TEST_HEADER include/spdk/nvme_spec.h 00:04:00.609 TEST_HEADER include/spdk/nvme_zns.h 00:04:00.609 CC test/dma/test_dma/test_dma.o 00:04:00.609 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:00.609 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:00.609 TEST_HEADER include/spdk/nvmf.h 00:04:00.609 TEST_HEADER include/spdk/nvmf_spec.h 00:04:00.609 TEST_HEADER include/spdk/nvmf_transport.h 00:04:00.609 TEST_HEADER include/spdk/opal.h 00:04:00.609 TEST_HEADER include/spdk/opal_spec.h 00:04:00.609 TEST_HEADER include/spdk/pci_ids.h 00:04:00.609 TEST_HEADER include/spdk/pipe.h 00:04:00.609 TEST_HEADER include/spdk/queue.h 00:04:00.609 TEST_HEADER include/spdk/reduce.h 00:04:00.609 TEST_HEADER include/spdk/rpc.h 00:04:00.609 TEST_HEADER include/spdk/scheduler.h 00:04:00.609 TEST_HEADER include/spdk/scsi.h 00:04:00.609 TEST_HEADER include/spdk/scsi_spec.h 00:04:00.609 TEST_HEADER include/spdk/sock.h 00:04:00.609 TEST_HEADER include/spdk/stdinc.h 00:04:00.609 TEST_HEADER include/spdk/string.h 00:04:00.609 TEST_HEADER include/spdk/thread.h 00:04:00.609 TEST_HEADER include/spdk/trace.h 00:04:00.609 TEST_HEADER include/spdk/trace_parser.h 00:04:00.609 TEST_HEADER include/spdk/tree.h 00:04:00.609 TEST_HEADER include/spdk/ublk.h 00:04:00.609 TEST_HEADER include/spdk/util.h 00:04:00.609 TEST_HEADER include/spdk/uuid.h 00:04:00.609 LINK spdk_trace_record 00:04:00.609 TEST_HEADER include/spdk/version.h 00:04:00.609 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:00.609 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:00.609 TEST_HEADER include/spdk/vhost.h 00:04:00.609 TEST_HEADER include/spdk/vmd.h 00:04:00.609 TEST_HEADER include/spdk/xor.h 00:04:00.866 TEST_HEADER include/spdk/zipf.h 00:04:00.866 CXX test/cpp_headers/accel.o 00:04:00.866 LINK nvmf_tgt 00:04:00.866 LINK mkfs 00:04:00.866 LINK bdev_svc 00:04:00.866 LINK spdk_trace 00:04:00.866 CXX test/cpp_headers/accel_module.o 00:04:01.124 CXX test/cpp_headers/assert.o 00:04:01.124 LINK test_dma 00:04:01.124 CXX test/cpp_headers/barrier.o 00:04:01.124 CC test/env/mem_callbacks/mem_callbacks.o 00:04:01.124 LINK bdevio 00:04:01.124 LINK dif 00:04:01.125 LINK accel_perf 00:04:01.125 CC test/env/vtophys/vtophys.o 00:04:01.125 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:01.125 CXX test/cpp_headers/base64.o 00:04:01.382 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.382 LINK mem_callbacks 00:04:01.382 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:01.382 CC test/app/histogram_perf/histogram_perf.o 00:04:01.382 CC test/app/jsoncat/jsoncat.o 00:04:01.382 LINK vtophys 00:04:01.382 LINK env_dpdk_post_init 00:04:01.382 CXX test/cpp_headers/bdev.o 00:04:01.382 CXX test/cpp_headers/bdev_module.o 00:04:01.382 CC examples/bdev/hello_world/hello_bdev.o 00:04:01.639 LINK iscsi_tgt 00:04:01.639 CC examples/blob/hello_world/hello_blob.o 00:04:01.639 LINK histogram_perf 00:04:01.639 LINK jsoncat 00:04:01.639 CXX test/cpp_headers/bdev_zone.o 00:04:01.639 CC test/event/event_perf/event_perf.o 00:04:01.640 CXX test/cpp_headers/bit_array.o 00:04:01.640 CC test/env/memory/memory_ut.o 00:04:01.640 LINK hello_bdev 00:04:01.897 LINK nvme_fuzz 00:04:01.897 LINK hello_blob 00:04:01.897 CC test/lvol/esnap/esnap.o 00:04:01.897 CC app/spdk_tgt/spdk_tgt.o 00:04:01.897 CC test/nvme/aer/aer.o 00:04:01.897 LINK event_perf 00:04:01.897 CXX test/cpp_headers/bit_pool.o 00:04:01.897 CC 
test/nvme/reset/reset.o 00:04:02.155 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:02.155 LINK spdk_tgt 00:04:02.155 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.155 CC examples/blob/cli/blobcli.o 00:04:02.155 CC test/event/reactor/reactor.o 00:04:02.155 CXX test/cpp_headers/blob_bdev.o 00:04:02.155 LINK aer 00:04:02.155 LINK reset 00:04:02.155 LINK memory_ut 00:04:02.413 LINK reactor 00:04:02.413 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.413 CC app/spdk_lspci/spdk_lspci.o 00:04:02.413 CC app/spdk_nvme_perf/perf.o 00:04:02.671 CC test/nvme/sgl/sgl.o 00:04:02.671 CXX test/cpp_headers/blobfs.o 00:04:02.671 CC test/event/reactor_perf/reactor_perf.o 00:04:02.671 CC test/env/pci/pci_ut.o 00:04:02.671 LINK spdk_lspci 00:04:02.671 LINK blobcli 00:04:02.671 LINK reactor_perf 00:04:02.671 CXX test/cpp_headers/blob.o 00:04:02.929 CC test/rpc_client/rpc_client_test.o 00:04:02.929 LINK sgl 00:04:02.929 CXX test/cpp_headers/conf.o 00:04:02.929 LINK bdevperf 00:04:02.929 CC test/event/app_repeat/app_repeat.o 00:04:02.929 LINK pci_ut 00:04:02.929 CC test/thread/poller_perf/poller_perf.o 00:04:03.187 LINK rpc_client_test 00:04:03.187 CC test/nvme/e2edp/nvme_dp.o 00:04:03.187 LINK app_repeat 00:04:03.187 LINK poller_perf 00:04:03.187 CXX test/cpp_headers/config.o 00:04:03.187 CXX test/cpp_headers/cpuset.o 00:04:03.187 CC examples/ioat/perf/perf.o 00:04:03.444 CC test/nvme/overhead/overhead.o 00:04:03.444 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:03.444 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:03.444 LINK spdk_nvme_perf 00:04:03.444 CXX test/cpp_headers/crc16.o 00:04:03.444 CC test/event/scheduler/scheduler.o 00:04:03.444 LINK nvme_dp 00:04:03.444 LINK ioat_perf 00:04:03.444 CC examples/ioat/verify/verify.o 00:04:03.702 CXX test/cpp_headers/crc32.o 00:04:03.702 LINK overhead 00:04:03.702 CC app/spdk_nvme_identify/identify.o 00:04:03.702 LINK scheduler 00:04:03.702 CC test/nvme/err_injection/err_injection.o 00:04:03.702 CC app/spdk_nvme_discover/discovery_aer.o 00:04:03.702 LINK vhost_fuzz 00:04:03.702 CXX test/cpp_headers/crc64.o 00:04:03.702 LINK verify 00:04:03.960 LINK iscsi_fuzz 00:04:03.960 CC test/nvme/startup/startup.o 00:04:03.960 LINK err_injection 00:04:03.960 CXX test/cpp_headers/dif.o 00:04:03.960 CC test/nvme/reserve/reserve.o 00:04:03.960 LINK spdk_nvme_discover 00:04:03.960 CC test/nvme/simple_copy/simple_copy.o 00:04:03.960 LINK startup 00:04:04.218 CC examples/nvme/hello_world/hello_world.o 00:04:04.218 CXX test/cpp_headers/dma.o 00:04:04.218 CC examples/nvme/reconnect/reconnect.o 00:04:04.218 CC test/app/stub/stub.o 00:04:04.218 LINK reserve 00:04:04.218 CC app/spdk_top/spdk_top.o 00:04:04.218 LINK simple_copy 00:04:04.218 CXX test/cpp_headers/endian.o 00:04:04.476 LINK hello_world 00:04:04.476 LINK stub 00:04:04.476 CC app/vhost/vhost.o 00:04:04.476 CXX test/cpp_headers/env_dpdk.o 00:04:04.476 LINK spdk_nvme_identify 00:04:04.476 CXX test/cpp_headers/env.o 00:04:04.476 CC test/nvme/connect_stress/connect_stress.o 00:04:04.476 LINK reconnect 00:04:04.476 CXX test/cpp_headers/event.o 00:04:04.476 CXX test/cpp_headers/fd_group.o 00:04:04.476 CC test/nvme/boot_partition/boot_partition.o 00:04:04.476 LINK vhost 00:04:04.741 LINK connect_stress 00:04:04.741 CXX test/cpp_headers/fd.o 00:04:04.741 LINK boot_partition 00:04:04.741 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:04.741 CC test/nvme/compliance/nvme_compliance.o 00:04:04.741 CC examples/sock/hello_world/hello_sock.o 00:04:04.741 CC app/spdk_dd/spdk_dd.o 00:04:05.001 CC examples/vmd/lsvmd/lsvmd.o 00:04:05.001 CXX 
test/cpp_headers/file.o 00:04:05.001 CC examples/vmd/led/led.o 00:04:05.001 CC test/nvme/fused_ordering/fused_ordering.o 00:04:05.001 LINK hello_sock 00:04:05.001 LINK lsvmd 00:04:05.001 LINK nvme_compliance 00:04:05.001 CXX test/cpp_headers/ftl.o 00:04:05.259 LINK led 00:04:05.259 LINK spdk_top 00:04:05.259 LINK spdk_dd 00:04:05.259 LINK fused_ordering 00:04:05.259 LINK nvme_manage 00:04:05.259 CC examples/nvme/arbitration/arbitration.o 00:04:05.259 CXX test/cpp_headers/gpt_spec.o 00:04:05.259 CC examples/nvme/hotplug/hotplug.o 00:04:05.259 CC app/fio/nvme/fio_plugin.o 00:04:05.259 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:05.517 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:05.517 CC examples/nvmf/nvmf/nvmf.o 00:04:05.517 CXX test/cpp_headers/hexlify.o 00:04:05.517 CC examples/util/zipf/zipf.o 00:04:05.517 LINK cmb_copy 00:04:05.517 LINK hotplug 00:04:05.517 CC examples/thread/thread/thread_ex.o 00:04:05.517 LINK arbitration 00:04:05.775 CXX test/cpp_headers/histogram_data.o 00:04:05.775 LINK doorbell_aers 00:04:05.775 LINK zipf 00:04:05.775 LINK nvmf 00:04:05.775 CXX test/cpp_headers/idxd.o 00:04:05.775 CC examples/nvme/abort/abort.o 00:04:06.032 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:06.032 LINK thread 00:04:06.032 CC examples/idxd/perf/perf.o 00:04:06.032 CC test/nvme/fdp/fdp.o 00:04:06.032 CC test/nvme/cuse/cuse.o 00:04:06.032 LINK spdk_nvme 00:04:06.032 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:06.032 CXX test/cpp_headers/idxd_spec.o 00:04:06.032 LINK interrupt_tgt 00:04:06.032 CXX test/cpp_headers/init.o 00:04:06.290 CC app/fio/bdev/fio_plugin.o 00:04:06.291 CXX test/cpp_headers/ioat.o 00:04:06.291 LINK pmr_persistence 00:04:06.291 LINK fdp 00:04:06.291 LINK abort 00:04:06.291 LINK idxd_perf 00:04:06.291 CXX test/cpp_headers/ioat_spec.o 00:04:06.291 CXX test/cpp_headers/iscsi_spec.o 00:04:06.291 CXX test/cpp_headers/json.o 00:04:06.548 CXX test/cpp_headers/jsonrpc.o 00:04:06.548 CXX test/cpp_headers/likely.o 00:04:06.548 CXX test/cpp_headers/log.o 00:04:06.548 CXX test/cpp_headers/lvol.o 00:04:06.548 CXX test/cpp_headers/memory.o 00:04:06.548 CXX test/cpp_headers/mmio.o 00:04:06.548 CXX test/cpp_headers/nbd.o 00:04:06.548 CXX test/cpp_headers/notify.o 00:04:06.548 CXX test/cpp_headers/nvme.o 00:04:06.548 CXX test/cpp_headers/nvme_intel.o 00:04:06.548 CXX test/cpp_headers/nvme_ocssd.o 00:04:06.548 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:06.548 CXX test/cpp_headers/nvme_spec.o 00:04:06.548 LINK esnap 00:04:06.548 CXX test/cpp_headers/nvme_zns.o 00:04:06.807 LINK spdk_bdev 00:04:06.807 CXX test/cpp_headers/nvmf_cmd.o 00:04:06.807 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:06.807 CXX test/cpp_headers/nvmf.o 00:04:06.807 CXX test/cpp_headers/nvmf_spec.o 00:04:06.807 CXX test/cpp_headers/nvmf_transport.o 00:04:06.807 CXX test/cpp_headers/opal.o 00:04:06.807 CXX test/cpp_headers/opal_spec.o 00:04:06.807 CXX test/cpp_headers/pci_ids.o 00:04:07.065 CXX test/cpp_headers/pipe.o 00:04:07.065 CXX test/cpp_headers/queue.o 00:04:07.065 CXX test/cpp_headers/reduce.o 00:04:07.065 CXX test/cpp_headers/rpc.o 00:04:07.065 CXX test/cpp_headers/scheduler.o 00:04:07.065 LINK cuse 00:04:07.065 CXX test/cpp_headers/scsi.o 00:04:07.065 CXX test/cpp_headers/scsi_spec.o 00:04:07.065 CXX test/cpp_headers/sock.o 00:04:07.065 CXX test/cpp_headers/stdinc.o 00:04:07.065 CXX test/cpp_headers/string.o 00:04:07.065 CXX test/cpp_headers/thread.o 00:04:07.065 CXX test/cpp_headers/trace.o 00:04:07.065 CXX test/cpp_headers/trace_parser.o 00:04:07.065 CXX test/cpp_headers/tree.o 
00:04:07.323 CXX test/cpp_headers/ublk.o 00:04:07.323 CXX test/cpp_headers/util.o 00:04:07.323 CXX test/cpp_headers/uuid.o 00:04:07.323 CXX test/cpp_headers/version.o 00:04:07.323 CXX test/cpp_headers/vfio_user_pci.o 00:04:07.323 CXX test/cpp_headers/vfio_user_spec.o 00:04:07.323 CXX test/cpp_headers/vhost.o 00:04:07.323 CXX test/cpp_headers/vmd.o 00:04:07.323 CXX test/cpp_headers/xor.o 00:04:07.323 CXX test/cpp_headers/zipf.o 00:04:07.581 00:04:07.581 real 0m50.729s 00:04:07.581 user 4m58.446s 00:04:07.581 sys 0m57.185s 00:04:07.581 05:44:29 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:07.581 05:44:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:07.581 ************************************ 00:04:07.581 END TEST make 00:04:07.581 ************************************ 00:04:07.581 05:44:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:07.581 05:44:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:07.581 05:44:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:07.840 05:44:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:07.840 05:44:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:07.840 05:44:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:07.840 05:44:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:07.840 05:44:29 -- scripts/common.sh@335 -- # IFS=.-: 00:04:07.840 05:44:29 -- scripts/common.sh@335 -- # read -ra ver1 00:04:07.840 05:44:29 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.840 05:44:29 -- scripts/common.sh@336 -- # read -ra ver2 00:04:07.840 05:44:29 -- scripts/common.sh@337 -- # local 'op=<' 00:04:07.840 05:44:29 -- scripts/common.sh@339 -- # ver1_l=2 00:04:07.840 05:44:29 -- scripts/common.sh@340 -- # ver2_l=1 00:04:07.840 05:44:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:07.840 05:44:29 -- scripts/common.sh@343 -- # case "$op" in 00:04:07.840 05:44:29 -- scripts/common.sh@344 -- # : 1 00:04:07.840 05:44:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:07.840 05:44:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.840 05:44:29 -- scripts/common.sh@364 -- # decimal 1 00:04:07.840 05:44:29 -- scripts/common.sh@352 -- # local d=1 00:04:07.840 05:44:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.840 05:44:29 -- scripts/common.sh@354 -- # echo 1 00:04:07.840 05:44:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:07.840 05:44:29 -- scripts/common.sh@365 -- # decimal 2 00:04:07.840 05:44:29 -- scripts/common.sh@352 -- # local d=2 00:04:07.840 05:44:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.840 05:44:29 -- scripts/common.sh@354 -- # echo 2 00:04:07.840 05:44:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:07.840 05:44:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:07.840 05:44:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:07.840 05:44:29 -- scripts/common.sh@367 -- # return 0 00:04:07.840 05:44:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.840 05:44:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.840 --rc genhtml_branch_coverage=1 00:04:07.840 --rc genhtml_function_coverage=1 00:04:07.840 --rc genhtml_legend=1 00:04:07.840 --rc geninfo_all_blocks=1 00:04:07.840 --rc geninfo_unexecuted_blocks=1 00:04:07.840 00:04:07.840 ' 00:04:07.840 05:44:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.840 --rc genhtml_branch_coverage=1 00:04:07.840 --rc genhtml_function_coverage=1 00:04:07.840 --rc genhtml_legend=1 00:04:07.840 --rc geninfo_all_blocks=1 00:04:07.840 --rc geninfo_unexecuted_blocks=1 00:04:07.840 00:04:07.840 ' 00:04:07.840 05:44:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.840 --rc genhtml_branch_coverage=1 00:04:07.840 --rc genhtml_function_coverage=1 00:04:07.840 --rc genhtml_legend=1 00:04:07.840 --rc geninfo_all_blocks=1 00:04:07.840 --rc geninfo_unexecuted_blocks=1 00:04:07.840 00:04:07.840 ' 00:04:07.840 05:44:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.840 --rc genhtml_branch_coverage=1 00:04:07.840 --rc genhtml_function_coverage=1 00:04:07.840 --rc genhtml_legend=1 00:04:07.840 --rc geninfo_all_blocks=1 00:04:07.840 --rc geninfo_unexecuted_blocks=1 00:04:07.840 00:04:07.840 ' 00:04:07.841 05:44:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:07.841 05:44:29 -- nvmf/common.sh@7 -- # uname -s 00:04:07.841 05:44:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.841 05:44:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.841 05:44:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.841 05:44:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.841 05:44:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.841 05:44:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.841 05:44:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.841 05:44:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.841 05:44:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.841 05:44:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.841 05:44:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:04:07.841 
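The xtrace above (lt / cmp_versions from scripts/common.sh) splits the lcov version string on '.', '-' and ':' and compares it component by component against 2. A standalone sketch of that idea, not the actual scripts/common.sh implementation; the function name version_lt is illustrative and purely numeric components are assumed:

    # Hedged sketch of the component-wise version compare traced above.
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        local i x y
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0      # strictly older
            (( x > y )) && return 1      # newer
        done
        return 1                          # equal counts as not-less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 \
        && echo "lcov < 2: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

The outcome of this comparison is what selects the branch/function coverage --rc flags collected into LCOV_OPTS in the trace above.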
05:44:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:04:07.841 05:44:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.841 05:44:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.841 05:44:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:07.841 05:44:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:07.841 05:44:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.841 05:44:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.841 05:44:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.841 05:44:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.841 05:44:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.841 05:44:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.841 05:44:29 -- paths/export.sh@5 -- # export PATH 00:04:07.841 05:44:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.841 05:44:29 -- nvmf/common.sh@46 -- # : 0 00:04:07.841 05:44:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:07.841 05:44:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:07.841 05:44:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:07.841 05:44:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.841 05:44:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.841 05:44:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:07.841 05:44:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:07.841 05:44:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:07.841 05:44:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.841 05:44:29 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.841 05:44:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.841 05:44:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.841 05:44:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:07.841 05:44:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.841 05:44:29 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:07.841 05:44:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.841 05:44:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.841 05:44:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.841 05:44:29 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:04:07.841 05:44:29 -- spdk/autotest.sh@48 -- # udevadm_pid=59771 00:04:07.841 05:44:29 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:07.841 05:44:29 -- spdk/autotest.sh@54 -- # echo 59796 00:04:07.841 05:44:29 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:07.841 05:44:29 -- spdk/autotest.sh@56 -- # echo 59799 00:04:07.841 05:44:29 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:07.841 05:44:29 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:07.841 05:44:29 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:07.841 05:44:29 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:07.841 05:44:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:07.841 05:44:29 -- common/autotest_common.sh@10 -- # set +x 00:04:07.841 05:44:29 -- spdk/autotest.sh@70 -- # create_test_list 00:04:07.841 05:44:29 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:07.841 05:44:29 -- common/autotest_common.sh@10 -- # set +x 00:04:07.841 05:44:29 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:07.841 05:44:29 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:07.841 05:44:29 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:07.841 05:44:29 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:07.841 05:44:29 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:07.841 05:44:29 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:07.841 05:44:29 -- common/autotest_common.sh@1450 -- # uname 00:04:07.841 05:44:29 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:07.841 05:44:29 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:07.841 05:44:29 -- common/autotest_common.sh@1470 -- # uname 00:04:07.841 05:44:29 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:07.841 05:44:29 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:07.841 05:44:29 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:07.841 lcov: LCOV version 1.15 00:04:07.841 05:44:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:17.850 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:17.850 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:17.850 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:17.850 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:17.850 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:17.850 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:39.786 05:44:57 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:39.786 05:44:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.786 05:44:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.786 05:44:57 -- spdk/autotest.sh@89 -- # rm -f 00:04:39.786 05:44:57 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.786 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:39.786 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:39.786 05:44:58 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:39.786 05:44:58 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:39.786 05:44:58 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:39.786 05:44:58 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:39.786 05:44:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:39.786 05:44:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:39.786 05:44:58 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:39.786 05:44:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.786 05:44:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:39.786 05:44:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:39.786 05:44:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:39.786 05:44:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:39.786 05:44:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.786 05:44:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:39.786 05:44:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:39.786 05:44:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:39.786 05:44:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:39.786 05:44:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:39.786 05:44:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:39.786 05:44:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:39.786 05:44:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:39.786 05:44:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:39.786 05:44:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:39.786 05:44:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:39.786 05:44:58 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:39.786 05:44:58 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:39.786 05:44:58 -- spdk/autotest.sh@108 -- # grep -v p 00:04:39.786 05:44:58 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:39.786 05:44:58 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:39.786 05:44:58 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:39.786 05:44:58 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:39.786 05:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.786 No valid GPT data, bailing 00:04:39.786 05:44:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
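The get_zoned_devs trace above amounts to scanning /sys/block for zoned NVMe namespaces before the cleanup step touches any of them. A minimal standalone sketch of that check (an approximation of the traced logic, not the script's verbatim source; the real helper also records the PCI address of each zoned device):

# Collect namespaces whose queue/zoned attribute exists and is not "none"
zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    if [[ -e $nvme/queue/zoned ]] && [[ $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs+=("$dev")
    fi
done
echo "zoned namespaces: ${zoned_devs[*]:-none}"

In this run every namespace reports "none", which is why each is_block_zoned check in the trace falls through without adding anything.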
00:04:39.786 05:44:58 -- scripts/common.sh@393 -- # pt= 00:04:39.786 05:44:58 -- scripts/common.sh@394 -- # return 1 00:04:39.786 05:44:58 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.786 1+0 records in 00:04:39.786 1+0 records out 00:04:39.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00645186 s, 163 MB/s 00:04:39.786 05:44:58 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:39.786 05:44:58 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:39.786 05:44:58 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:39.786 05:44:58 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:39.786 05:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:39.786 No valid GPT data, bailing 00:04:39.786 05:44:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:39.786 05:44:58 -- scripts/common.sh@393 -- # pt= 00:04:39.786 05:44:58 -- scripts/common.sh@394 -- # return 1 00:04:39.786 05:44:58 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:39.786 1+0 records in 00:04:39.786 1+0 records out 00:04:39.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408287 s, 257 MB/s 00:04:39.786 05:44:58 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:39.786 05:44:58 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:39.786 05:44:58 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:39.786 05:44:58 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:39.786 05:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:39.786 No valid GPT data, bailing 00:04:39.786 05:44:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:39.786 05:44:58 -- scripts/common.sh@393 -- # pt= 00:04:39.786 05:44:58 -- scripts/common.sh@394 -- # return 1 00:04:39.786 05:44:58 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:39.786 1+0 records in 00:04:39.786 1+0 records out 00:04:39.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465756 s, 225 MB/s 00:04:39.786 05:44:58 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:39.786 05:44:58 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:39.786 05:44:58 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:39.786 05:44:58 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:39.786 05:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:39.786 No valid GPT data, bailing 00:04:39.786 05:44:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:39.786 05:44:58 -- scripts/common.sh@393 -- # pt= 00:04:39.786 05:44:58 -- scripts/common.sh@394 -- # return 1 00:04:39.786 05:44:58 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:39.786 1+0 records in 00:04:39.786 1+0 records out 00:04:39.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450598 s, 233 MB/s 00:04:39.786 05:44:58 -- spdk/autotest.sh@116 -- # sync 00:04:39.786 05:44:59 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.786 05:44:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.786 05:44:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.786 05:45:00 -- spdk/autotest.sh@122 -- # uname -s 00:04:39.786 05:45:00 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
00:04:39.786 05:45:00 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.786 05:45:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.786 05:45:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.786 05:45:00 -- common/autotest_common.sh@10 -- # set +x 00:04:39.786 ************************************ 00:04:39.786 START TEST setup.sh 00:04:39.786 ************************************ 00:04:39.786 05:45:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.786 * Looking for test storage... 00:04:39.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.786 05:45:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:39.786 05:45:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:39.786 05:45:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:39.786 05:45:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:39.786 05:45:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:39.786 05:45:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:39.786 05:45:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:39.786 05:45:01 -- scripts/common.sh@335 -- # IFS=.-: 00:04:39.786 05:45:01 -- scripts/common.sh@335 -- # read -ra ver1 00:04:39.786 05:45:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.786 05:45:01 -- scripts/common.sh@336 -- # read -ra ver2 00:04:39.786 05:45:01 -- scripts/common.sh@337 -- # local 'op=<' 00:04:39.786 05:45:01 -- scripts/common.sh@339 -- # ver1_l=2 00:04:39.786 05:45:01 -- scripts/common.sh@340 -- # ver2_l=1 00:04:39.786 05:45:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:39.786 05:45:01 -- scripts/common.sh@343 -- # case "$op" in 00:04:39.786 05:45:01 -- scripts/common.sh@344 -- # : 1 00:04:39.786 05:45:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:39.786 05:45:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.786 05:45:01 -- scripts/common.sh@364 -- # decimal 1 00:04:39.786 05:45:01 -- scripts/common.sh@352 -- # local d=1 00:04:39.786 05:45:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.786 05:45:01 -- scripts/common.sh@354 -- # echo 1 00:04:39.786 05:45:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:39.786 05:45:01 -- scripts/common.sh@365 -- # decimal 2 00:04:39.786 05:45:01 -- scripts/common.sh@352 -- # local d=2 00:04:39.786 05:45:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.786 05:45:01 -- scripts/common.sh@354 -- # echo 2 00:04:39.786 05:45:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:39.786 05:45:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:39.786 05:45:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:39.786 05:45:01 -- scripts/common.sh@367 -- # return 0 00:04:39.786 05:45:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.786 05:45:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:39.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.786 --rc genhtml_branch_coverage=1 00:04:39.786 --rc genhtml_function_coverage=1 00:04:39.786 --rc genhtml_legend=1 00:04:39.786 --rc geninfo_all_blocks=1 00:04:39.786 --rc geninfo_unexecuted_blocks=1 00:04:39.786 00:04:39.786 ' 00:04:39.787 05:45:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.787 --rc genhtml_branch_coverage=1 00:04:39.787 --rc genhtml_function_coverage=1 00:04:39.787 --rc genhtml_legend=1 00:04:39.787 --rc geninfo_all_blocks=1 00:04:39.787 --rc geninfo_unexecuted_blocks=1 00:04:39.787 00:04:39.787 ' 00:04:39.787 05:45:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.787 --rc genhtml_branch_coverage=1 00:04:39.787 --rc genhtml_function_coverage=1 00:04:39.787 --rc genhtml_legend=1 00:04:39.787 --rc geninfo_all_blocks=1 00:04:39.787 --rc geninfo_unexecuted_blocks=1 00:04:39.787 00:04:39.787 ' 00:04:39.787 05:45:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.787 --rc genhtml_branch_coverage=1 00:04:39.787 --rc genhtml_function_coverage=1 00:04:39.787 --rc genhtml_legend=1 00:04:39.787 --rc geninfo_all_blocks=1 00:04:39.787 --rc geninfo_unexecuted_blocks=1 00:04:39.787 00:04:39.787 ' 00:04:39.787 05:45:01 -- setup/test-setup.sh@10 -- # uname -s 00:04:39.787 05:45:01 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.787 05:45:01 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.787 05:45:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.787 05:45:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.787 05:45:01 -- common/autotest_common.sh@10 -- # set +x 00:04:39.787 ************************************ 00:04:39.787 START TEST acl 00:04:39.787 ************************************ 00:04:39.787 05:45:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.787 * Looking for test storage... 
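The cmp_versions trace that reappears before each test script is a version gate for lcov: releases before 2.0 spell the branch/function coverage switches with an lcov_ prefix, so the harness inspects `lcov --version` and builds LCOV_OPTS accordingly. A condensed sketch of that decision (major-version check only; the traced helper compares full dotted versions, and the 2.x spelling here is an assumption since this run only exercises the 1.x branch):

ver=$(lcov --version | awk '{print $NF}')          # e.g. "1.15" in this run
if (( ${ver%%.*} < 2 )); then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
else
    lcov_rc_opt='--rc branch_coverage=1 --rc function_coverage=1'   # assumed lcov 2.x form
fi
export LCOV="lcov $lcov_rc_opt"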
00:04:39.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.787 05:45:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:39.787 05:45:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:39.787 05:45:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:39.787 05:45:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:39.787 05:45:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:39.787 05:45:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:39.787 05:45:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:39.787 05:45:01 -- scripts/common.sh@335 -- # IFS=.-: 00:04:39.787 05:45:01 -- scripts/common.sh@335 -- # read -ra ver1 00:04:39.787 05:45:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.787 05:45:01 -- scripts/common.sh@336 -- # read -ra ver2 00:04:39.787 05:45:01 -- scripts/common.sh@337 -- # local 'op=<' 00:04:39.787 05:45:01 -- scripts/common.sh@339 -- # ver1_l=2 00:04:39.787 05:45:01 -- scripts/common.sh@340 -- # ver2_l=1 00:04:39.787 05:45:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:39.787 05:45:01 -- scripts/common.sh@343 -- # case "$op" in 00:04:39.787 05:45:01 -- scripts/common.sh@344 -- # : 1 00:04:39.787 05:45:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:39.787 05:45:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.787 05:45:01 -- scripts/common.sh@364 -- # decimal 1 00:04:39.787 05:45:01 -- scripts/common.sh@352 -- # local d=1 00:04:39.787 05:45:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.787 05:45:01 -- scripts/common.sh@354 -- # echo 1 00:04:39.787 05:45:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:39.787 05:45:01 -- scripts/common.sh@365 -- # decimal 2 00:04:39.787 05:45:01 -- scripts/common.sh@352 -- # local d=2 00:04:39.787 05:45:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.787 05:45:01 -- scripts/common.sh@354 -- # echo 2 00:04:39.787 05:45:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:39.787 05:45:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:39.787 05:45:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:39.787 05:45:01 -- scripts/common.sh@367 -- # return 0 00:04:39.787 05:45:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.787 05:45:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.787 --rc genhtml_branch_coverage=1 00:04:39.787 --rc genhtml_function_coverage=1 00:04:39.787 --rc genhtml_legend=1 00:04:39.787 --rc geninfo_all_blocks=1 00:04:39.787 --rc geninfo_unexecuted_blocks=1 00:04:39.787 00:04:39.787 ' 00:04:39.787 05:45:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.787 --rc genhtml_branch_coverage=1 00:04:39.787 --rc genhtml_function_coverage=1 00:04:39.787 --rc genhtml_legend=1 00:04:39.787 --rc geninfo_all_blocks=1 00:04:39.787 --rc geninfo_unexecuted_blocks=1 00:04:39.787 00:04:39.787 ' 00:04:39.787 05:45:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.787 --rc genhtml_branch_coverage=1 00:04:39.787 --rc genhtml_function_coverage=1 00:04:39.787 --rc genhtml_legend=1 00:04:39.787 --rc geninfo_all_blocks=1 00:04:39.787 --rc geninfo_unexecuted_blocks=1 00:04:39.787 00:04:39.787 ' 00:04:39.787 05:45:01 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:39.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.787 --rc genhtml_branch_coverage=1 00:04:39.787 --rc genhtml_function_coverage=1 00:04:39.787 --rc genhtml_legend=1 00:04:39.787 --rc geninfo_all_blocks=1 00:04:39.787 --rc geninfo_unexecuted_blocks=1 00:04:39.787 00:04:39.787 ' 00:04:39.787 05:45:01 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.787 05:45:01 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:39.787 05:45:01 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:39.787 05:45:01 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:39.787 05:45:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:39.787 05:45:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:39.787 05:45:01 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:39.787 05:45:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.787 05:45:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:39.787 05:45:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:39.787 05:45:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:39.787 05:45:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:39.787 05:45:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.787 05:45:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:39.787 05:45:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:39.787 05:45:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:39.787 05:45:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:39.787 05:45:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:39.787 05:45:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:39.787 05:45:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:39.787 05:45:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:39.787 05:45:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:39.787 05:45:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:39.787 05:45:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:39.787 05:45:01 -- setup/acl.sh@12 -- # devs=() 00:04:39.787 05:45:01 -- setup/acl.sh@12 -- # declare -a devs 00:04:39.787 05:45:01 -- setup/acl.sh@13 -- # drivers=() 00:04:39.787 05:45:01 -- setup/acl.sh@13 -- # declare -A drivers 00:04:39.787 05:45:01 -- setup/acl.sh@51 -- # setup reset 00:04:39.787 05:45:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.787 05:45:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.354 05:45:01 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:40.354 05:45:01 -- setup/acl.sh@16 -- # local dev driver 00:04:40.354 05:45:01 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.354 05:45:01 -- setup/acl.sh@15 -- # setup output status 00:04:40.354 05:45:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.354 05:45:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:40.614 Hugepages 00:04:40.614 node hugesize free / total 00:04:40.614 05:45:02 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:40.614 05:45:02 -- setup/acl.sh@19 -- # continue 00:04:40.614 05:45:02 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:40.614 00:04:40.614 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.614 05:45:02 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:40.614 05:45:02 -- setup/acl.sh@19 -- # continue 00:04:40.614 05:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.614 05:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:40.614 05:45:02 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:40.614 05:45:02 -- setup/acl.sh@20 -- # continue 00:04:40.614 05:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.872 05:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:40.872 05:45:02 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:40.872 05:45:02 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:40.872 05:45:02 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:40.872 05:45:02 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:40.872 05:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.872 05:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:40.872 05:45:02 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:40.873 05:45:02 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:40.873 05:45:02 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:40.873 05:45:02 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:40.873 05:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.873 05:45:02 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:40.873 05:45:02 -- setup/acl.sh@54 -- # run_test denied denied 00:04:40.873 05:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.873 05:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.873 05:45:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.873 ************************************ 00:04:40.873 START TEST denied 00:04:40.873 ************************************ 00:04:40.873 05:45:02 -- common/autotest_common.sh@1114 -- # denied 00:04:40.873 05:45:02 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:40.873 05:45:02 -- setup/acl.sh@38 -- # setup output config 00:04:40.873 05:45:02 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:40.873 05:45:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.873 05:45:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.809 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:41.809 05:45:03 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:41.809 05:45:03 -- setup/acl.sh@28 -- # local dev driver 00:04:41.809 05:45:03 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.809 05:45:03 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:41.809 05:45:03 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:41.809 05:45:03 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.809 05:45:03 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.809 05:45:03 -- setup/acl.sh@41 -- # setup reset 00:04:41.809 05:45:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.809 05:45:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.376 ************************************ 00:04:42.376 END TEST denied 00:04:42.376 ************************************ 00:04:42.376 00:04:42.376 real 0m1.434s 00:04:42.376 user 0m0.582s 00:04:42.377 sys 0m0.792s 00:04:42.377 05:45:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.377 05:45:03 -- 
common/autotest_common.sh@10 -- # set +x 00:04:42.377 05:45:03 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:42.377 05:45:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.377 05:45:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.377 05:45:03 -- common/autotest_common.sh@10 -- # set +x 00:04:42.377 ************************************ 00:04:42.377 START TEST allowed 00:04:42.377 ************************************ 00:04:42.377 05:45:03 -- common/autotest_common.sh@1114 -- # allowed 00:04:42.377 05:45:03 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:42.377 05:45:03 -- setup/acl.sh@45 -- # setup output config 00:04:42.377 05:45:03 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:42.377 05:45:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.377 05:45:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.313 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.313 05:45:04 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:43.313 05:45:04 -- setup/acl.sh@28 -- # local dev driver 00:04:43.313 05:45:04 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:43.313 05:45:04 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:43.313 05:45:04 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:43.313 05:45:04 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:43.313 05:45:04 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:43.313 05:45:04 -- setup/acl.sh@48 -- # setup reset 00:04:43.313 05:45:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.313 05:45:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.880 00:04:43.880 real 0m1.491s 00:04:43.880 user 0m0.692s 00:04:43.880 sys 0m0.777s 00:04:43.880 05:45:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.880 05:45:05 -- common/autotest_common.sh@10 -- # set +x 00:04:43.880 ************************************ 00:04:43.880 END TEST allowed 00:04:43.880 ************************************ 00:04:43.880 00:04:43.880 real 0m4.272s 00:04:43.880 user 0m1.927s 00:04:43.880 sys 0m2.281s 00:04:43.880 05:45:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.880 05:45:05 -- common/autotest_common.sh@10 -- # set +x 00:04:43.880 ************************************ 00:04:43.880 END TEST acl 00:04:43.880 ************************************ 00:04:43.880 05:45:05 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:43.880 05:45:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.880 05:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.880 05:45:05 -- common/autotest_common.sh@10 -- # set +x 00:04:43.880 ************************************ 00:04:43.880 START TEST hugepages 00:04:43.880 ************************************ 00:04:43.880 05:45:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:43.880 * Looking for test storage... 
00:04:43.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.880 05:45:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.880 05:45:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.880 05:45:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:44.141 05:45:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:44.141 05:45:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:44.141 05:45:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:44.141 05:45:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:44.141 05:45:05 -- scripts/common.sh@335 -- # IFS=.-: 00:04:44.141 05:45:05 -- scripts/common.sh@335 -- # read -ra ver1 00:04:44.141 05:45:05 -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.141 05:45:05 -- scripts/common.sh@336 -- # read -ra ver2 00:04:44.141 05:45:05 -- scripts/common.sh@337 -- # local 'op=<' 00:04:44.141 05:45:05 -- scripts/common.sh@339 -- # ver1_l=2 00:04:44.141 05:45:05 -- scripts/common.sh@340 -- # ver2_l=1 00:04:44.141 05:45:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:44.141 05:45:05 -- scripts/common.sh@343 -- # case "$op" in 00:04:44.141 05:45:05 -- scripts/common.sh@344 -- # : 1 00:04:44.141 05:45:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:44.141 05:45:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.141 05:45:05 -- scripts/common.sh@364 -- # decimal 1 00:04:44.141 05:45:05 -- scripts/common.sh@352 -- # local d=1 00:04:44.141 05:45:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.141 05:45:05 -- scripts/common.sh@354 -- # echo 1 00:04:44.141 05:45:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:44.141 05:45:05 -- scripts/common.sh@365 -- # decimal 2 00:04:44.141 05:45:05 -- scripts/common.sh@352 -- # local d=2 00:04:44.141 05:45:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.141 05:45:05 -- scripts/common.sh@354 -- # echo 2 00:04:44.141 05:45:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:44.141 05:45:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:44.141 05:45:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:44.141 05:45:05 -- scripts/common.sh@367 -- # return 0 00:04:44.141 05:45:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.141 05:45:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:44.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.141 --rc genhtml_branch_coverage=1 00:04:44.141 --rc genhtml_function_coverage=1 00:04:44.141 --rc genhtml_legend=1 00:04:44.141 --rc geninfo_all_blocks=1 00:04:44.141 --rc geninfo_unexecuted_blocks=1 00:04:44.141 00:04:44.141 ' 00:04:44.141 05:45:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:44.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.141 --rc genhtml_branch_coverage=1 00:04:44.141 --rc genhtml_function_coverage=1 00:04:44.141 --rc genhtml_legend=1 00:04:44.141 --rc geninfo_all_blocks=1 00:04:44.141 --rc geninfo_unexecuted_blocks=1 00:04:44.141 00:04:44.141 ' 00:04:44.141 05:45:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:44.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.141 --rc genhtml_branch_coverage=1 00:04:44.141 --rc genhtml_function_coverage=1 00:04:44.141 --rc genhtml_legend=1 00:04:44.141 --rc geninfo_all_blocks=1 00:04:44.141 --rc geninfo_unexecuted_blocks=1 00:04:44.141 00:04:44.141 ' 00:04:44.141 05:45:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:44.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.141 --rc genhtml_branch_coverage=1 00:04:44.141 --rc genhtml_function_coverage=1 00:04:44.141 --rc genhtml_legend=1 00:04:44.141 --rc geninfo_all_blocks=1 00:04:44.141 --rc geninfo_unexecuted_blocks=1 00:04:44.141 00:04:44.141 ' 00:04:44.141 05:45:05 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:44.141 05:45:05 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:44.141 05:45:05 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:44.141 05:45:05 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:44.141 05:45:05 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:44.141 05:45:05 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:44.141 05:45:05 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:44.141 05:45:05 -- setup/common.sh@18 -- # local node= 00:04:44.141 05:45:05 -- setup/common.sh@19 -- # local var val 00:04:44.141 05:45:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.141 05:45:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.141 05:45:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.141 05:45:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.141 05:45:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.141 05:45:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 4844200 kB' 'MemAvailable: 7345436 kB' 'Buffers: 2684 kB' 'Cached: 2705776 kB' 'SwapCached: 0 kB' 'Active: 454844 kB' 'Inactive: 2370072 kB' 'Active(anon): 126968 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 118100 kB' 'Mapped: 50660 kB' 'Shmem: 10512 kB' 'KReclaimable: 80472 kB' 'Slab: 180868 kB' 'SReclaimable: 80472 kB' 'SUnreclaim: 100396 kB' 'KernelStack: 6720 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 318148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- 
setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.141 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.141 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # continue 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.142 05:45:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.142 05:45:05 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.142 05:45:05 -- setup/common.sh@33 -- # echo 2048 00:04:44.142 05:45:05 -- setup/common.sh@33 -- # return 0 00:04:44.142 05:45:05 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:44.142 05:45:05 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:44.142 05:45:05 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:44.142 05:45:05 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:44.142 05:45:05 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:44.142 05:45:05 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:44.142 05:45:05 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:44.142 05:45:05 -- setup/hugepages.sh@207 -- # get_nodes 00:04:44.142 05:45:05 -- setup/hugepages.sh@27 -- # local node 00:04:44.142 05:45:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.142 05:45:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:44.142 05:45:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:44.142 05:45:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.142 05:45:05 -- setup/hugepages.sh@208 -- # clear_hp 00:04:44.142 05:45:05 -- setup/hugepages.sh@37 -- # local node hp 00:04:44.142 05:45:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:44.142 05:45:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.142 05:45:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:44.142 05:45:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.142 05:45:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:44.142 05:45:05 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:44.142 05:45:05 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:44.142 05:45:05 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:44.143 05:45:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.143 05:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.143 05:45:05 -- common/autotest_common.sh@10 -- # set +x 00:04:44.143 ************************************ 00:04:44.143 START TEST default_setup 00:04:44.143 ************************************ 00:04:44.143 05:45:05 -- common/autotest_common.sh@1114 -- # default_setup 00:04:44.143 05:45:05 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:44.143 05:45:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:44.143 05:45:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:44.143 05:45:05 -- setup/hugepages.sh@51 -- # shift 00:04:44.143 05:45:05 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:44.143 05:45:05 -- setup/hugepages.sh@52 -- # local node_ids 00:04:44.143 05:45:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.143 05:45:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:44.143 05:45:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:44.143 05:45:05 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:44.143 05:45:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.143 05:45:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:44.143 05:45:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:44.143 05:45:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.143 05:45:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.143 05:45:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:44.143 05:45:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:44.143 05:45:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:44.143 05:45:05 -- setup/hugepages.sh@73 -- # return 0 00:04:44.143 05:45:05 -- setup/hugepages.sh@137 -- # setup output 00:04:44.143 05:45:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.143 05:45:05 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.971 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:44.971 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:44.971 05:45:06 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:44.971 05:45:06 -- setup/hugepages.sh@89 -- # local node 00:04:44.971 05:45:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.971 05:45:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.971 05:45:06 -- setup/hugepages.sh@92 -- # local surp 00:04:44.971 05:45:06 -- setup/hugepages.sh@93 -- # local resv 00:04:44.971 05:45:06 -- setup/hugepages.sh@94 -- # local anon 00:04:44.971 05:45:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.971 05:45:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.971 05:45:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.971 05:45:06 -- setup/common.sh@18 -- # local node= 00:04:44.971 05:45:06 -- setup/common.sh@19 -- # local var val 00:04:44.971 05:45:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.971 05:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.971 05:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.971 05:45:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.971 05:45:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.971 05:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.971 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6945380 kB' 'MemAvailable: 9446448 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456508 kB' 'Inactive: 2370084 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 50840 kB' 'Shmem: 10488 kB' 'KReclaimable: 80112 kB' 'Slab: 180520 kB' 'SReclaimable: 80112 kB' 'SUnreclaim: 100408 kB' 'KernelStack: 6720 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- 
setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.972 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.972 05:45:06 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.972 05:45:06 -- setup/common.sh@33 -- # echo 0 00:04:44.972 05:45:06 -- setup/common.sh@33 -- # return 0 00:04:44.972 05:45:06 -- setup/hugepages.sh@97 -- # anon=0 00:04:44.972 05:45:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.972 05:45:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.972 05:45:06 -- setup/common.sh@18 -- # local node= 00:04:44.973 05:45:06 -- setup/common.sh@19 -- # local var val 00:04:44.973 05:45:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.973 05:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.973 05:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.973 05:45:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.973 05:45:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.973 05:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6945824 kB' 'MemAvailable: 9446892 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456212 kB' 'Inactive: 2370084 kB' 'Active(anon): 128336 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50796 kB' 'Shmem: 10488 kB' 'KReclaimable: 80112 kB' 'Slab: 180516 kB' 'SReclaimable: 80112 kB' 'SUnreclaim: 100404 kB' 'KernelStack: 6704 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 
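The scan being traced here is the generic key lookup that setup/common.sh's get_meminfo performs for every query: the selected meminfo file is loaded, any "Node N " prefix is stripped, and each "Key: value" pair is read with IFS=': ' until the requested key matches, at which point the value is echoed and the function returns. A minimal standalone sketch of that lookup, assuming a simplified argument list and a hypothetical helper name (get_meminfo_sketch), not the script's exact implementation:

get_meminfo_sketch() {
    # Hypothetical re-creation of the traced lookup, not the SPDK helper itself.
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A node id switches the source to that node's meminfo file, mirroring the
    # [[ -e /sys/devices/system/node/node$node/meminfo ]] test in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <id> "; drop that, then split
    # every "Key: value unit" line and stop at the first key that matches.
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

On the machine above, get_meminfo_sketch Hugepagesize would print 2048, and get_meminfo_sketch HugePages_Surp 0 would read node 0's file instead of /proc/meminfo.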
00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- 
setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.973 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.973 05:45:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 
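Only three of the scanned figures feed the verification ahead: HugePages_Surp, HugePages_Rsvd and HugePages_Total. The hugepages.sh trace earlier derived its target of 1024 pages by dividing the 2097152 size argument by the 2048 kB Hugepagesize (the unit of that size argument is an assumption on my part), and the check further down only passes when the total reconciles with that target plus surplus and reserved pages. Roughly, with illustrative variable names and only the numbers taken from this log:

hugepagesize_kb=2048                       # Hugepagesize from /proc/meminfo
size=2097152                               # size passed to get_test_nr_hugepages
nr_hugepages=$(( size / hugepagesize_kb )) # 1024, matching the traced nr_hugepages
surp=0 resv=0 total=1024                   # HugePages_Surp / _Rsvd / _Total as scanned
(( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting is consistent'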
00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.974 05:45:06 -- setup/common.sh@33 -- # echo 0 00:04:44.974 05:45:06 -- setup/common.sh@33 -- # return 0 00:04:44.974 05:45:06 -- setup/hugepages.sh@99 -- # surp=0 00:04:44.974 05:45:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.974 05:45:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.974 05:45:06 -- setup/common.sh@18 -- # local node= 00:04:44.974 05:45:06 -- setup/common.sh@19 -- # local var val 00:04:44.974 05:45:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.974 05:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.974 05:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.974 05:45:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.974 05:45:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.974 05:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.974 
05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6945824 kB' 'MemAvailable: 9446892 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456232 kB' 'Inactive: 2370084 kB' 'Active(anon): 128356 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80112 kB' 'Slab: 180508 kB' 'SReclaimable: 80112 kB' 'SUnreclaim: 100396 kB' 'KernelStack: 6736 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 
05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.974 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.974 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 
05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.975 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.975 05:45:06 -- setup/common.sh@33 -- # echo 0 00:04:44.975 05:45:06 -- setup/common.sh@33 -- # return 0 00:04:44.975 05:45:06 -- setup/hugepages.sh@100 -- # resv=0 00:04:44.975 nr_hugepages=1024 00:04:44.975 05:45:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.975 resv_hugepages=0 00:04:44.975 05:45:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.975 surplus_hugepages=0 00:04:44.975 05:45:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.975 anon_hugepages=0 00:04:44.975 05:45:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.975 05:45:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.975 05:45:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.975 05:45:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.975 05:45:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.975 05:45:06 -- setup/common.sh@18 -- # local node= 00:04:44.975 05:45:06 -- setup/common.sh@19 -- # local var val 00:04:44.975 05:45:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.975 05:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.975 05:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.975 05:45:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.975 05:45:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.975 05:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.975 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6945824 kB' 'MemAvailable: 9446892 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 455972 kB' 'Inactive: 2370084 kB' 'Active(anon): 128096 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119244 kB' 'Mapped: 50664 kB' 
'Shmem: 10488 kB' 'KReclaimable: 80112 kB' 'Slab: 180508 kB' 'SReclaimable: 80112 kB' 'SUnreclaim: 100396 kB' 'KernelStack: 6720 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 
05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.976 05:45:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.976 05:45:06 -- 
setup/common.sh@32 -- # continue 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.976 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.236 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.236 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.237 05:45:06 -- 
setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.237 05:45:06 -- setup/common.sh@33 -- # echo 1024 00:04:45.237 05:45:06 -- setup/common.sh@33 -- # return 0 00:04:45.237 05:45:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.237 05:45:06 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.237 05:45:06 -- setup/hugepages.sh@27 -- # local node 00:04:45.237 05:45:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.237 05:45:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.237 05:45:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.237 05:45:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.237 05:45:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.237 05:45:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.237 05:45:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.237 05:45:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.237 05:45:06 -- setup/common.sh@18 -- # local node=0 00:04:45.237 05:45:06 -- setup/common.sh@19 -- # local var val 00:04:45.237 05:45:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.237 05:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.237 05:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.237 05:45:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.237 05:45:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.237 05:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6949816 kB' 'MemUsed: 5289304 kB' 'SwapCached: 0 kB' 'Active: 456016 kB' 'Inactive: 2370084 kB' 'Active(anon): 128140 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 2708452 kB' 'Mapped: 50664 kB' 'AnonPages: 119300 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80128 kB' 'Slab: 180524 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 
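For readability, here is a minimal bash sketch of the get_meminfo loop whose xtrace output surrounds this point. It is reconstructed from the trace rather than copied from setup/common.sh, and the sed-based "Node <id>" prefix stripping is an assumption standing in for the script's own parameter expansion; the field-by-field skip is what produces the long runs of "continue" lines above.

#!/usr/bin/env bash
# Illustrative sketch (not the SPDK setup/common.sh source verbatim): fetch one
# field from /proc/meminfo, or from /sys/devices/system/node/node<N>/meminfo
# when a node id is supplied.
get_meminfo() {
	local get=$1 node=$2
	local mem_f=/proc/meminfo var val _

	# Prefer the per-node meminfo file when it exists for the requested node.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	# Per-node files prefix each line with "Node <id> "; strip it so the field
	# name lands in $var. Every non-matching field is skipped, which is what
	# the repeated "continue" entries in the trace correspond to.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(sed 's/^Node [0-9]* //' "$mem_f")
	return 1
}

# Usage matching the trace: whole-system pool size, then node0 surplus pages.
get_meminfo HugePages_Total    # e.g. 1024
get_meminfo HugePages_Surp 0   # e.g. 0
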
00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.237 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.237 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.238 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.238 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.238 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.238 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.238 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.238 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # continue 00:04:45.238 05:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.238 05:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.238 05:45:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.238 05:45:06 -- setup/common.sh@33 -- # echo 0 00:04:45.238 05:45:06 -- setup/common.sh@33 -- # return 0 00:04:45.238 05:45:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.238 05:45:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.238 05:45:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.238 05:45:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.238 node0=1024 expecting 1024 00:04:45.238 05:45:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.238 05:45:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.238 00:04:45.238 real 0m0.999s 00:04:45.238 user 0m0.454s 00:04:45.238 sys 0m0.452s 00:04:45.238 05:45:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.238 05:45:06 -- common/autotest_common.sh@10 -- # set +x 00:04:45.238 ************************************ 00:04:45.238 END TEST default_setup 00:04:45.238 ************************************ 00:04:45.238 05:45:06 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:45.238 05:45:06 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.238 05:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.238 05:45:06 -- common/autotest_common.sh@10 -- # set +x 00:04:45.238 ************************************ 00:04:45.238 START TEST per_node_1G_alloc 00:04:45.238 ************************************ 00:04:45.238 05:45:06 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:45.238 05:45:06 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:45.238 05:45:06 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:45.238 05:45:06 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:45.238 05:45:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:45.238 05:45:06 -- setup/hugepages.sh@51 -- # shift 00:04:45.238 05:45:06 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:45.238 05:45:06 -- setup/hugepages.sh@52 -- # local node_ids 00:04:45.238 05:45:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.238 05:45:06 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:45.238 05:45:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:45.238 05:45:06 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:45.238 05:45:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.238 05:45:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.238 05:45:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.238 05:45:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.238 05:45:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.238 05:45:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:45.238 05:45:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:45.238 05:45:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:45.238 05:45:06 -- setup/hugepages.sh@73 -- # return 0 00:04:45.238 05:45:06 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:45.238 05:45:06 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:45.238 05:45:06 -- setup/hugepages.sh@146 -- # setup output 00:04:45.238 05:45:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.238 05:45:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.498 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.498 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.498 05:45:07 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:45.498 05:45:07 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:45.498 05:45:07 -- setup/hugepages.sh@89 -- # local node 00:04:45.498 05:45:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.498 05:45:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.498 05:45:07 -- setup/hugepages.sh@92 -- # local surp 00:04:45.498 05:45:07 -- setup/hugepages.sh@93 -- # local resv 00:04:45.498 05:45:07 -- setup/hugepages.sh@94 -- # local anon 00:04:45.498 05:45:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.498 05:45:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.498 05:45:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.498 05:45:07 -- setup/common.sh@18 -- # local node= 00:04:45.498 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:45.498 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.498 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.498 05:45:07 -- 
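As a quick orientation before the verification trace that follows, this is a hedged sketch of what the per_node_1G_alloc test is doing here, assuming the 2048 kB default hugepage size reported in the trace; the awk one-liners are illustrative stand-ins for the script's get_meminfo helper, and the setup.sh path is abbreviated.

#!/usr/bin/env bash
# Sketch of the traced flow: 1048576 kB requested on node 0 becomes
# 1048576 / 2048 = 512 hugepages, reserved via scripts/setup.sh and then
# verified against /proc/meminfo.
size_kb=1048576
hugepage_kb=2048                            # Hugepagesize reported in the trace
nr_hugepages=$(( size_kb / hugepage_kb ))   # 512

# The runner exports NRHUGE/HUGENODE and re-runs setup.sh, as the trace shows.
NRHUGE=$nr_hugepages HUGENODE=0 ./scripts/setup.sh

# AnonHugePages is only sampled when transparent_hugepage is not "[never]",
# mirroring the check at setup/hugepages.sh@96 above.
anon=0
if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
	anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi

# The script walks meminfo field by field via get_meminfo; awk is a stand-in.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Same consistency check as setup/hugepages.sh@110, then the per-node echo.
if (( total == nr_hugepages + surp + resv )); then
	echo "node0=${total} expecting ${nr_hugepages}"
else
	echo "hugepage pool mismatch: ${total} != ${nr_hugepages}+${surp}+${resv}" >&2
	exit 1
fi
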
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.498 05:45:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.498 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.498 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.498 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7996296 kB' 'MemAvailable: 10497376 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456904 kB' 'Inactive: 2370088 kB' 'Active(anon): 129028 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 120224 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180532 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100404 kB' 'KernelStack: 6856 kB' 'PageTables: 4660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.498 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.498 05:45:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 
-- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 
05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.499 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.499 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.499 05:45:07 -- setup/common.sh@33 -- # echo 0 00:04:45.499 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:45.500 05:45:07 -- setup/hugepages.sh@97 -- # anon=0 00:04:45.500 05:45:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.500 05:45:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.500 05:45:07 -- setup/common.sh@18 -- # local node= 00:04:45.500 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:45.500 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.500 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.500 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.500 05:45:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.500 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.500 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7996548 kB' 'MemAvailable: 10497628 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456404 kB' 'Inactive: 2370088 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 
kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119688 kB' 'Mapped: 50612 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180548 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100420 kB' 'KernelStack: 6768 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.500 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.500 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.761 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.761 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # 
continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 05:45:07 -- setup/common.sh@33 -- # echo 0 00:04:45.762 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:45.762 05:45:07 -- setup/hugepages.sh@99 -- # surp=0 00:04:45.762 05:45:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.762 05:45:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.762 05:45:07 -- setup/common.sh@18 -- # local node= 00:04:45.762 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:45.762 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.762 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.762 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.762 05:45:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.762 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.762 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7996548 kB' 'MemAvailable: 10497628 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456060 kB' 'Inactive: 2370088 kB' 'Active(anon): 128184 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119344 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180544 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100416 kB' 'KernelStack: 6720 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.763 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.764 05:45:07 -- setup/common.sh@33 -- # echo 0 00:04:45.764 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:45.764 05:45:07 -- setup/hugepages.sh@100 -- # resv=0 00:04:45.764 nr_hugepages=512 00:04:45.764 05:45:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:45.764 resv_hugepages=0 00:04:45.764 05:45:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.764 surplus_hugepages=0 00:04:45.764 05:45:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.764 anon_hugepages=0 00:04:45.764 05:45:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.764 05:45:07 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:45.764 05:45:07 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:45.764 05:45:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.764 05:45:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.764 05:45:07 -- setup/common.sh@18 -- # local node= 00:04:45.764 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:45.764 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.764 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.764 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.764 05:45:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.764 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.764 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7996548 kB' 'MemAvailable: 10497628 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456304 kB' 'Inactive: 2370088 kB' 'Active(anon): 128428 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119584 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180544 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100416 kB' 'KernelStack: 6720 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 
05:45:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 
05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.764 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.764 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.765 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.765 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.765 05:45:07 -- setup/common.sh@33 -- # echo 512 00:04:45.765 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:45.765 05:45:07 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:45.765 05:45:07 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.765 05:45:07 -- setup/hugepages.sh@27 -- # local node 00:04:45.765 05:45:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.765 05:45:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.765 05:45:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.765 05:45:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.765 05:45:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.765 05:45:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.765 05:45:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.765 05:45:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.766 05:45:07 -- setup/common.sh@18 -- # local node=0 00:04:45.766 05:45:07 -- setup/common.sh@19 -- # local 
var val 00:04:45.766 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.766 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.766 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.766 05:45:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.766 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.766 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7996548 kB' 'MemUsed: 4242572 kB' 'SwapCached: 0 kB' 'Active: 456240 kB' 'Inactive: 2370088 kB' 'Active(anon): 128364 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 2708452 kB' 'Mapped: 50664 kB' 'AnonPages: 119572 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80128 kB' 'Slab: 180540 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- 
setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.766 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.766 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.767 05:45:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # continue 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.767 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.767 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.767 05:45:07 -- setup/common.sh@33 -- # echo 0 00:04:45.767 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:45.767 05:45:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.767 05:45:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.767 05:45:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.767 05:45:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.767 node0=512 expecting 512 00:04:45.767 05:45:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:45.767 05:45:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:45.767 00:04:45.767 real 0m0.556s 00:04:45.767 user 0m0.256s 00:04:45.767 sys 0m0.305s 00:04:45.767 05:45:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.767 05:45:07 -- common/autotest_common.sh@10 -- # set +x 00:04:45.767 ************************************ 00:04:45.767 END TEST per_node_1G_alloc 00:04:45.767 ************************************ 00:04:45.767 05:45:07 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:45.767 05:45:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.767 05:45:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.767 05:45:07 -- common/autotest_common.sh@10 -- # set +x 00:04:45.767 ************************************ 00:04:45.767 START TEST even_2G_alloc 00:04:45.767 ************************************ 00:04:45.767 05:45:07 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:45.767 05:45:07 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:45.767 05:45:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:45.767 05:45:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:45.767 05:45:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.767 05:45:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:45.767 05:45:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:45.767 05:45:07 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.767 05:45:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.767 05:45:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:45.767 05:45:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.767 05:45:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.767 05:45:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.767 05:45:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.767 05:45:07 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:45.767 05:45:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.767 05:45:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:45.767 05:45:07 -- setup/hugepages.sh@83 -- # : 0 00:04:45.767 05:45:07 -- setup/hugepages.sh@84 -- # : 0 00:04:45.767 05:45:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.767 05:45:07 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:45.767 05:45:07 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:45.767 05:45:07 -- setup/hugepages.sh@153 -- # setup output 00:04:45.767 05:45:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.767 05:45:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.026 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.026 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.290 05:45:07 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:46.290 05:45:07 -- setup/hugepages.sh@89 -- # local node 00:04:46.290 05:45:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.290 05:45:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.290 05:45:07 -- setup/hugepages.sh@92 -- # local surp 00:04:46.290 05:45:07 -- setup/hugepages.sh@93 -- # local resv 00:04:46.290 05:45:07 -- setup/hugepages.sh@94 -- # local anon 00:04:46.290 05:45:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.290 05:45:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.290 05:45:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.290 05:45:07 -- setup/common.sh@18 -- # local node= 00:04:46.290 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:46.290 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.290 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.290 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.290 05:45:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.290 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.290 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6963776 kB' 'MemAvailable: 9464856 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456496 kB' 'Inactive: 2370088 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 50788 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180492 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100364 kB' 'KernelStack: 6740 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 
05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.290 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # 
continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.291 05:45:07 -- setup/common.sh@33 -- # echo 0 00:04:46.291 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:46.291 05:45:07 -- setup/hugepages.sh@97 -- # anon=0 00:04:46.291 05:45:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.291 05:45:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.291 05:45:07 -- setup/common.sh@18 -- # local node= 00:04:46.291 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:46.291 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.291 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.291 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.291 05:45:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.291 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.291 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6963776 kB' 'MemAvailable: 9464856 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456324 kB' 'Inactive: 2370088 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119568 kB' 'Mapped: 50788 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180496 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100368 kB' 'KernelStack: 6708 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 
00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # 
continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 05:45:07 -- setup/common.sh@33 -- # echo 0 00:04:46.292 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:46.292 05:45:07 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.292 05:45:07 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.292 05:45:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.292 05:45:07 -- setup/common.sh@18 -- # local node= 00:04:46.292 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:46.292 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.292 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.293 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.293 05:45:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.293 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.293 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6963776 kB' 'MemAvailable: 9464856 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456352 kB' 'Inactive: 2370088 kB' 'Active(anon): 128476 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119584 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180500 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100372 kB' 'KernelStack: 6736 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 
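
The `local node=` and `[[ -e /sys/devices/system/node/node/meminfo ]]` records above show how the helper picks its input: with no node argument the sysfs path does not exist, so it stays on the global /proc/meminfo; later in this run the same scan is pointed at node0's meminfo, whose lines carry a "Node 0 " prefix that the `${mem[@]#Node +([0-9]) }` expansion strips. A rough equivalent of that source selection (extglob is assumed to be enabled, as it must be for that pattern):

shopt -s extglob                           # needed for the +([0-9]) pattern below
node=                                      # empty -> global view; "0" -> per-node view
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")           # sysfs lines read "Node 0 MemTotal: ..."
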
00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- 
setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.293 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 
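
The meminfo snapshots being scanned here all report HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0 with a 2048 kB page size, which is internally consistent with the Hugetlb figure they print:

# Snapshot sanity check: total hugetlb memory equals page count times page size.
(( 1024 * 2048 == 2097152 )) && echo "Hugetlb = HugePages_Total * Hugepagesize holds"
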
00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.294 05:45:07 -- setup/common.sh@33 -- # echo 0 00:04:46.294 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:46.294 05:45:07 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.294 nr_hugepages=1024 00:04:46.294 05:45:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.294 resv_hugepages=0 00:04:46.294 05:45:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.294 surplus_hugepages=0 00:04:46.294 05:45:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.294 anon_hugepages=0 00:04:46.294 05:45:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.294 05:45:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.294 05:45:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.294 05:45:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.294 05:45:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.294 05:45:07 -- setup/common.sh@18 -- # local node= 00:04:46.294 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:46.294 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.294 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.294 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.294 05:45:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.294 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.294 05:45:07 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6963776 kB' 'MemAvailable: 9464856 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456092 kB' 'Inactive: 2370088 kB' 'Active(anon): 128216 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119324 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180492 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100364 kB' 'KernelStack: 6736 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 
05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.294 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.294 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 
00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 
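
The `(( 1024 == nr_hugepages + surp + resv ))` record traced at hugepages.sh@107 above is the heart of this verification: the configured page count must match the kernel's view once surplus and reserved pages are added back, and the records that follow spread the same expectation across NUMA nodes (a single node on this runner). Condensed from the trace, reusing its variable names rather than the script itself:

nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) && echo "global hugepage count matches"
# Per-node bookkeeping as traced: a single node, every expected page on node0.
nodes_test[0]=1024
(( nodes_test[0] += resv ))                # mirrors hugepages.sh@116
echo "node0=${nodes_test[0]} expecting 1024"
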
00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.295 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.295 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.296 05:45:07 -- setup/common.sh@33 -- # echo 1024 00:04:46.296 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:46.296 05:45:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.296 05:45:07 -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.296 05:45:07 -- setup/hugepages.sh@27 -- # local node 00:04:46.296 05:45:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.296 05:45:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.296 05:45:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.296 05:45:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.296 05:45:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.296 05:45:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.296 05:45:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.296 05:45:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.296 05:45:07 -- setup/common.sh@18 -- # local node=0 00:04:46.296 05:45:07 -- setup/common.sh@19 -- # local var val 00:04:46.296 05:45:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.296 05:45:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.296 05:45:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.296 05:45:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.296 05:45:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.296 05:45:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6964192 kB' 'MemUsed: 5274928 kB' 'SwapCached: 0 kB' 'Active: 456320 kB' 'Inactive: 2370088 kB' 'Active(anon): 128444 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 2708452 kB' 'Mapped: 50664 kB' 'AnonPages: 119552 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80128 kB' 'Slab: 180492 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.296 05:45:07 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 
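
Unlike /proc/meminfo, the node0 snapshot printed above reports MemUsed rather than MemAvailable; it is simply MemTotal minus MemFree, and the figures in the trace bear that out:

# Node 0 snapshot: 12239120 kB total - 6964192 kB free = 5274928 kB used.
(( 12239120 - 6964192 == 5274928 )) && echo "node0 MemUsed is consistent"
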
00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- 
setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.296 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.296 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # continue 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.297 05:45:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.297 05:45:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.297 05:45:07 -- setup/common.sh@33 -- # echo 0 00:04:46.297 05:45:07 -- setup/common.sh@33 -- # return 0 00:04:46.297 05:45:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.297 05:45:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.297 05:45:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.297 05:45:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.297 
node0=1024 expecting 1024 00:04:46.297 05:45:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.297 05:45:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.297 00:04:46.297 real 0m0.521s 00:04:46.297 user 0m0.261s 00:04:46.297 sys 0m0.295s 00:04:46.297 05:45:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.297 05:45:07 -- common/autotest_common.sh@10 -- # set +x 00:04:46.297 ************************************ 00:04:46.297 END TEST even_2G_alloc 00:04:46.297 ************************************ 00:04:46.297 05:45:07 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:46.297 05:45:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.297 05:45:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.297 05:45:07 -- common/autotest_common.sh@10 -- # set +x 00:04:46.297 ************************************ 00:04:46.297 START TEST odd_alloc 00:04:46.297 ************************************ 00:04:46.297 05:45:07 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:46.297 05:45:07 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:46.297 05:45:07 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:46.297 05:45:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.297 05:45:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.297 05:45:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:46.297 05:45:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.297 05:45:07 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.297 05:45:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.297 05:45:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:46.297 05:45:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.297 05:45:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.297 05:45:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.297 05:45:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.297 05:45:07 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.297 05:45:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.297 05:45:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:46.297 05:45:07 -- setup/hugepages.sh@83 -- # : 0 00:04:46.297 05:45:07 -- setup/hugepages.sh@84 -- # : 0 00:04:46.297 05:45:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.297 05:45:07 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:46.297 05:45:07 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:46.297 05:45:07 -- setup/hugepages.sh@160 -- # setup output 00:04:46.297 05:45:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.297 05:45:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.818 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.818 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.818 05:45:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:46.818 05:45:08 -- setup/hugepages.sh@89 -- # local node 00:04:46.818 05:45:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.818 05:45:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.818 05:45:08 -- setup/hugepages.sh@92 -- # local surp 00:04:46.818 05:45:08 -- setup/hugepages.sh@93 -- # local resv 00:04:46.818 05:45:08 -- setup/hugepages.sh@94 -- # local anon 00:04:46.818 05:45:08 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.818 05:45:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.818 05:45:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.818 05:45:08 -- setup/common.sh@18 -- # local node= 00:04:46.818 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:46.818 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.818 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.818 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.818 05:45:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.818 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.818 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.818 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6964320 kB' 'MemAvailable: 9465400 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456408 kB' 'Inactive: 2370088 kB' 'Active(anon): 128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119684 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180456 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100328 kB' 'KernelStack: 6696 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.818 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.818 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 
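(The odd page count being verified here follows from the HUGEMEM=2049 request traced above: 2049 MB is 2049*1024 = 2098176 kB, and at the reported Hugepagesize of 2048 kB that is 1024.5 pages, which the setup presumably rounds up to the odd total of 1025 that meminfo then reports (HugePages_Total: 1025, Hugetlb: 2099200 kB = 1025 * 2048 kB). A quick shell check of that assumed round-up:)

echo $(( (2049 * 1024 + 2048 - 1) / 2048 ))   # ceiling division -> 1025 pages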
00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # 
continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.819 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.819 05:45:08 -- setup/common.sh@33 -- # echo 0 00:04:46.819 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:46.819 05:45:08 -- setup/hugepages.sh@97 -- # anon=0 00:04:46.819 05:45:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.819 05:45:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.819 05:45:08 -- setup/common.sh@18 -- # local node= 00:04:46.819 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:46.819 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.819 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.819 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.819 05:45:08 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.819 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.819 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.819 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6964320 kB' 'MemAvailable: 9465400 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456340 kB' 'Inactive: 2370088 kB' 'Active(anon): 128464 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119600 kB' 'Mapped: 50668 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180476 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100348 kB' 'KernelStack: 6688 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 
-- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 
00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.820 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.820 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 
00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.821 05:45:08 -- setup/common.sh@33 -- # echo 0 00:04:46.821 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:46.821 05:45:08 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.821 05:45:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.821 05:45:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.821 05:45:08 -- setup/common.sh@18 -- # local node= 00:04:46.821 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:46.821 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.821 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.821 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.821 05:45:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.821 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.821 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6964320 kB' 'MemAvailable: 9465400 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456052 kB' 'Inactive: 2370088 kB' 'Active(anon): 128176 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119312 kB' 'Mapped: 50668 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180484 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100356 kB' 'KernelStack: 6736 kB' 
'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.821 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.821 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.822 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.822 05:45:08 -- setup/common.sh@33 -- # echo 0 00:04:46.822 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:46.822 05:45:08 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.822 nr_hugepages=1025 00:04:46.822 05:45:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:46.822 resv_hugepages=0 00:04:46.822 05:45:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.822 surplus_hugepages=0 00:04:46.822 05:45:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.822 anon_hugepages=0 00:04:46.822 05:45:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.822 05:45:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:46.822 05:45:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:46.822 05:45:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.822 05:45:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.822 05:45:08 -- setup/common.sh@18 -- # local node= 00:04:46.822 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:46.822 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.822 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.822 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.822 05:45:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.822 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.822 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.822 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6964320 kB' 'MemAvailable: 9465400 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456312 kB' 'Inactive: 2370088 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119572 kB' 'Mapped: 50668 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180484 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100356 kB' 'KernelStack: 6736 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 
00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.823 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.823 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.824 05:45:08 -- setup/common.sh@33 -- # echo 1025 00:04:46.824 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:46.824 05:45:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:46.824 05:45:08 -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.824 05:45:08 -- setup/hugepages.sh@27 -- # local node 00:04:46.824 05:45:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.824 05:45:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
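The loop traced above is the setup/common.sh get_meminfo helper walking a meminfo file key by key (splitting on IFS=': ') until it reaches the requested field, here resolving HugePages_Total to 1025 before get_nodes records that value for node 0. A minimal standalone sketch of that parsing pattern, assuming a simplified function name and argument handling that are illustrative rather than the exact SPDK implementation:

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern seen in the trace: read /proc/meminfo
# (or a per-node meminfo file), strip any "Node N " prefix, then scan
# key by key with IFS=': ' until the requested field is found.
shopt -s extglob

get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node lookups read the node-local meminfo instead, as the trace
    # does for /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so both
    # file layouts parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] || continue
        echo "$val"      # e.g. 1025 for HugePages_Total in the run above
        return 0
    done
    return 1
}

get_meminfo_value HugePages_Total      # system-wide lookup
get_meminfo_value HugePages_Surp 0     # node 0 lookup, as in the per-node check above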
00:04:46.824 05:45:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.824 05:45:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.824 05:45:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.824 05:45:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.824 05:45:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.824 05:45:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.824 05:45:08 -- setup/common.sh@18 -- # local node=0 00:04:46.824 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:46.824 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.824 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.824 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.824 05:45:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.824 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.824 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6964932 kB' 'MemUsed: 5274188 kB' 'SwapCached: 0 kB' 'Active: 456228 kB' 'Inactive: 2370088 kB' 'Active(anon): 128352 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708452 kB' 'Mapped: 50668 kB' 'AnonPages: 119456 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80128 kB' 'Slab: 180476 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 
05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.824 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.824 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 
05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # continue 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.825 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.825 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.825 05:45:08 -- setup/common.sh@33 -- # echo 0 00:04:46.825 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:46.825 05:45:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.825 05:45:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.825 05:45:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.825 05:45:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.825 node0=1025 expecting 1025 00:04:46.825 05:45:08 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:46.825 05:45:08 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:46.825 00:04:46.825 real 0m0.559s 00:04:46.825 user 0m0.258s 00:04:46.825 sys 0m0.311s 00:04:46.825 05:45:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.825 05:45:08 -- common/autotest_common.sh@10 -- # set +x 00:04:46.825 ************************************ 00:04:46.825 END TEST odd_alloc 00:04:46.825 ************************************ 00:04:47.084 05:45:08 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:47.084 05:45:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.084 05:45:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.084 05:45:08 -- common/autotest_common.sh@10 -- # set +x 00:04:47.084 ************************************ 00:04:47.084 START TEST custom_alloc 00:04:47.084 ************************************ 00:04:47.084 05:45:08 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:47.084 05:45:08 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:47.084 05:45:08 -- setup/hugepages.sh@169 -- # local node 00:04:47.084 05:45:08 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:47.084 05:45:08 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:47.084 05:45:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:47.084 05:45:08 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:47.084 05:45:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:47.084 05:45:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:47.084 05:45:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:47.084 05:45:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.084 05:45:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.084 05:45:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:47.084 05:45:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.084 05:45:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.084 05:45:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.084 05:45:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:47.084 05:45:08 -- setup/hugepages.sh@83 -- # : 0 00:04:47.084 05:45:08 -- setup/hugepages.sh@84 -- # : 0 00:04:47.084 05:45:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:47.084 05:45:08 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:47.084 05:45:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:47.084 05:45:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:47.084 05:45:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.084 05:45:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.084 05:45:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:47.084 05:45:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.084 05:45:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.084 05:45:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.084 05:45:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:47.084 05:45:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:47.084 05:45:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:47.084 05:45:08 -- setup/hugepages.sh@78 -- # return 0 00:04:47.084 05:45:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:47.084 05:45:08 -- setup/hugepages.sh@187 -- # setup output 00:04:47.084 05:45:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.084 05:45:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.346 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.346 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.346 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.346 05:45:08 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:47.346 05:45:08 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:47.346 05:45:08 -- setup/hugepages.sh@89 -- # local node 00:04:47.346 05:45:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.346 05:45:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.346 05:45:08 -- setup/hugepages.sh@92 -- # local surp 
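The custom_alloc setup traced above requests 1048576 kB of hugepages, which with the 2048 kB default hugepage size works out to nr_hugepages=512, all placed on the single online node and handed to scripts/setup.sh as HUGENODE='nodes_hp[0]=512'. A rough sketch of that sizing arithmetic, with variable names that are illustrative rather than the exact setup/hugepages.sh internals:

# Sketch: derive the hugepage count from a requested size and spread it
# evenly across the online NUMA nodes (one node on this VM).
size_kb=1048576
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512

nodes=(/sys/devices/system/node/node[0-9]*)
declare -a nodes_hp
for i in "${!nodes[@]}"; do
    nodes_hp[i]=$(( nr_hugepages / ${#nodes[@]} ))
done

hugenode=""
for i in "${!nodes_hp[@]}"; do
    hugenode+="nodes_hp[$i]=${nodes_hp[i]},"
done
echo "HUGENODE=${hugenode%,}"    # HUGENODE=nodes_hp[0]=512 on a single-node VM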
00:04:47.346 05:45:08 -- setup/hugepages.sh@93 -- # local resv 00:04:47.346 05:45:08 -- setup/hugepages.sh@94 -- # local anon 00:04:47.346 05:45:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.346 05:45:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.346 05:45:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.346 05:45:08 -- setup/common.sh@18 -- # local node= 00:04:47.346 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:47.346 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.346 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.346 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.346 05:45:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.346 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.346 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8012588 kB' 'MemAvailable: 10513668 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456308 kB' 'Inactive: 2370088 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119792 kB' 'Mapped: 50804 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180488 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100360 kB' 'KernelStack: 6728 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.346 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.346 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.347 05:45:08 -- setup/common.sh@33 -- # echo 0 00:04:47.347 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:47.347 05:45:08 -- setup/hugepages.sh@97 -- # anon=0 00:04:47.347 05:45:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.347 05:45:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.347 05:45:08 -- setup/common.sh@18 -- # local node= 00:04:47.347 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:47.347 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.347 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
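The lookups being traced here (AnonHugePages above, HugePages_Surp and then HugePages_Rsvd next) feed the same count check the odd_alloc run performed at hugepages.sh@110, where the observed HugePages_Total has to equal the requested count plus surplus and reserved pages. A minimal sketch of the shape of that check for this run's 512-page target, reusing the illustrative get_meminfo_value helper from the earlier sketch (the real verify_nr_hugepages logic may differ in detail):

# Sketch: compare the kernel's hugepage counters against the test's target.
expected=512                                   # custom_alloc target (nr_hugepages above)
total=$(get_meminfo_value HugePages_Total)
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)
if (( total == expected + surp + resv )); then
    echo "hugepage count verified: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage count: total=$total expected=$expected surp=$surp resv=$resv" >&2
    exit 1
fi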
00:04:47.347 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.347 05:45:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.347 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.347 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8012952 kB' 'MemAvailable: 10514032 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456308 kB' 'Inactive: 2370088 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119296 kB' 'Mapped: 50804 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180480 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100352 kB' 'KernelStack: 6696 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- 
setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.347 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.347 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.348 05:45:08 -- setup/common.sh@33 -- # echo 0 00:04:47.348 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:47.348 05:45:08 -- setup/hugepages.sh@99 -- # surp=0 00:04:47.348 05:45:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.348 05:45:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.348 05:45:08 -- setup/common.sh@18 -- # local node= 00:04:47.348 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:47.348 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.348 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.348 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.348 05:45:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.348 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.348 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8012952 kB' 'MemAvailable: 10514032 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456336 kB' 'Inactive: 2370088 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119576 kB' 'Mapped: 50804 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180480 kB' 
'SReclaimable: 80128 kB' 'SUnreclaim: 100352 kB' 'KernelStack: 6712 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.348 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.348 05:45:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 
00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.349 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.349 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 
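Once the scan reaches HugePages_Rsvd, the records that follow resolve it to 0 and hugepages.sh runs the custom_alloc accounting: resv=0, nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, and two arithmetic checks against the HugePages_Total read by the next meminfo scan. A small worked sketch of that bookkeeping, using the values visible in this log (the names surp and resv mirror the hugepages.sh trace):

    # Sketch: the custom_alloc invariants traced below, with this run's values.
    nr_hugepages=512    # pages the test asked for
    resv=0              # HugePages_Rsvd, as resolved just above
    surp=0              # HugePages_Surp
    total=512           # HugePages_Total, read via the same meminfo scan
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"   # hugepages.sh@107
    (( total == nr_hugepages ))               || echo "unexpected surplus/reserved"     # hugepages.sh@109

The per-node pass a little further down repeats the same lookup against /sys/devices/system/node/node0/meminfo, credits the 512 pages to node0, and ends with the expected "node0=512 expecting 512".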
00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.350 05:45:08 -- setup/common.sh@33 -- # echo 0 00:04:47.350 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:47.350 05:45:08 -- setup/hugepages.sh@100 -- # resv=0 00:04:47.350 nr_hugepages=512 00:04:47.350 05:45:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:47.350 resv_hugepages=0 00:04:47.350 05:45:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.350 05:45:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.350 surplus_hugepages=0 00:04:47.350 anon_hugepages=0 00:04:47.350 05:45:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.350 05:45:08 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:47.350 05:45:08 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:47.350 05:45:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.350 05:45:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.350 05:45:08 -- setup/common.sh@18 -- # local node= 00:04:47.350 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:47.350 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.350 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.350 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.350 05:45:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.350 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.350 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8012952 kB' 'MemAvailable: 10514032 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456380 kB' 'Inactive: 2370088 kB' 'Active(anon): 128504 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 50668 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180492 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100364 kB' 'KernelStack: 6720 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 
'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.350 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.350 05:45:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 
05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # continue 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.611 05:45:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.611 05:45:08 -- setup/common.sh@33 -- # echo 512 00:04:47.611 05:45:08 -- setup/common.sh@33 -- # return 0 00:04:47.611 05:45:08 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:47.611 05:45:08 -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.611 05:45:08 -- setup/hugepages.sh@27 -- # local node 00:04:47.611 05:45:08 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:47.611 05:45:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.611 05:45:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.611 05:45:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.611 05:45:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.611 05:45:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.611 05:45:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.611 05:45:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.611 05:45:08 -- setup/common.sh@18 -- # local node=0 00:04:47.611 05:45:08 -- setup/common.sh@19 -- # local var val 00:04:47.611 05:45:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.611 05:45:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.611 05:45:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.611 05:45:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.611 05:45:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.611 05:45:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.611 05:45:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8012952 kB' 'MemUsed: 4226168 kB' 'SwapCached: 0 kB' 'Active: 456148 kB' 'Inactive: 2370088 kB' 'Active(anon): 128272 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708452 kB' 'Mapped: 50668 kB' 'AnonPages: 119356 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80128 kB' 'Slab: 180488 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.612 05:45:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 
05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.612 05:45:09 -- setup/common.sh@33 -- # echo 0 00:04:47.612 05:45:09 -- setup/common.sh@33 -- # return 0 00:04:47.612 05:45:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.612 05:45:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.612 05:45:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.612 05:45:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.612 05:45:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:47.612 node0=512 expecting 512 00:04:47.612 05:45:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:47.612 00:04:47.612 real 0m0.538s 00:04:47.612 user 0m0.283s 00:04:47.612 sys 0m0.288s 00:04:47.612 05:45:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.612 05:45:09 -- common/autotest_common.sh@10 -- # set +x 00:04:47.612 ************************************ 00:04:47.612 END TEST custom_alloc 00:04:47.613 ************************************ 00:04:47.613 05:45:09 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:47.613 05:45:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.613 05:45:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.613 05:45:09 -- common/autotest_common.sh@10 -- # set +x 00:04:47.613 ************************************ 00:04:47.613 START TEST no_shrink_alloc 00:04:47.613 ************************************ 00:04:47.613 05:45:09 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:47.613 05:45:09 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:47.613 05:45:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.613 05:45:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:47.613 05:45:09 -- 
setup/hugepages.sh@51 -- # shift 00:04:47.613 05:45:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:47.613 05:45:09 -- setup/hugepages.sh@52 -- # local node_ids 00:04:47.613 05:45:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.613 05:45:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.613 05:45:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:47.613 05:45:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:47.613 05:45:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.613 05:45:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.613 05:45:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.613 05:45:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.613 05:45:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.613 05:45:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:47.613 05:45:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.613 05:45:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:47.613 05:45:09 -- setup/hugepages.sh@73 -- # return 0 00:04:47.613 05:45:09 -- setup/hugepages.sh@198 -- # setup output 00:04:47.613 05:45:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.613 05:45:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.873 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.873 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.873 05:45:09 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:47.873 05:45:09 -- setup/hugepages.sh@89 -- # local node 00:04:47.873 05:45:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.873 05:45:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.873 05:45:09 -- setup/hugepages.sh@92 -- # local surp 00:04:47.873 05:45:09 -- setup/hugepages.sh@93 -- # local resv 00:04:47.873 05:45:09 -- setup/hugepages.sh@94 -- # local anon 00:04:47.873 05:45:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.873 05:45:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.873 05:45:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.873 05:45:09 -- setup/common.sh@18 -- # local node= 00:04:47.873 05:45:09 -- setup/common.sh@19 -- # local var val 00:04:47.873 05:45:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.873 05:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.873 05:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.873 05:45:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.873 05:45:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.873 05:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6973944 kB' 'MemAvailable: 9475024 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456700 kB' 'Inactive: 2370088 kB' 'Active(anon): 128824 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119904 kB' 
'Mapped: 50820 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180532 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100404 kB' 'KernelStack: 6696 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.873 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.873 05:45:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
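At this point the log is inside the no_shrink_alloc test that started a few records earlier: get_test_nr_hugepages was called with 2097152 (kB) for node 0, which with the Hugepagesize of 2048 kB reported in the meminfo dumps works out to the nr_hugepages=1024 seen in the trace. verify_nr_hugepages then checks whether transparent hugepages are enabled (the "always [madvise] never" string above must not have [never] selected) and, if so, reads AnonHugePages with the same get_meminfo-style lookup, which resolves to 0 a few records below. A hedged sketch of those two steps; the path to the transparent hugepage knob is an assumption, since the log only shows the string it contains.

    # Sketch: no_shrink_alloc sizing plus the anon-hugepage gate traced around this point.
    size_kb=2097152
    hugepagesize_kb=2048                             # "Hugepagesize: 2048 kB" in the dumps
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1024, matching the trace

    # Assumed location of the "always [madvise] never" setting shown in the trace.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)            # 0 kB in this run (resolved below)
    fi

After this, the same scan pattern repeats once more for HugePages_Surp, which is what fills the remainder of these records.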
00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.874 05:45:09 -- setup/common.sh@33 -- # echo 0 00:04:47.874 05:45:09 -- setup/common.sh@33 -- # return 0 00:04:47.874 05:45:09 -- setup/hugepages.sh@97 -- # anon=0 00:04:47.874 05:45:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.874 05:45:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.874 05:45:09 -- setup/common.sh@18 -- # local node= 00:04:47.874 05:45:09 -- setup/common.sh@19 -- # local var val 00:04:47.874 05:45:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.874 05:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.874 05:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.874 05:45:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.874 05:45:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.874 05:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6973944 kB' 'MemAvailable: 9475024 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456372 kB' 'Inactive: 2370088 kB' 'Active(anon): 128496 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180536 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100408 kB' 'KernelStack: 6720 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.874 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.874 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.875 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.875 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.876 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.876 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.876 05:45:09 -- setup/common.sh@32 -- # continue 00:04:47.876 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.876 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.876 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 
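For reference, the get_meminfo traces above and below all walk the same lookup: snapshot /proc/meminfo (or a per-NUMA-node meminfo file), strip any "Node <n> " prefix, then scan each "key: value" line until the requested key matches and echo its value. The sketch below is an approximation reconstructed from this trace, not the exact SPDK setup/common.sh or setup/hugepages.sh source; the function name mirrors the traced script, but the final check and variable names are illustrative.

# get_meminfo KEY [NODE] — print KEY's value from /proc/meminfo or a
# per-NUMA-node meminfo file, or 0 when the key is absent.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}        # per-node lines carry this prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then     # the per-field comparison seen in the trace
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# Roughly the consistency check the hugepages trace performs: surplus and
# reserved pages should be 0 and HugePages_Total should equal the requested
# nr_hugepages (1024 in this run).
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
(( total == 1024 + surp + resv )) && echo "node0=$total expecting 1024"

With /proc/meminfo as captured in the dumps above (HugePages_Total: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0), this sketch would print "node0=1024 expecting 1024", matching the result echoed later in the log.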
00:04:48.138 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.138 05:45:09 -- setup/common.sh@33 -- # echo 0 00:04:48.138 05:45:09 -- setup/common.sh@33 -- # return 0 00:04:48.138 05:45:09 -- setup/hugepages.sh@99 -- # surp=0 00:04:48.138 05:45:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.138 05:45:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.138 05:45:09 -- setup/common.sh@18 -- # local node= 00:04:48.138 05:45:09 -- setup/common.sh@19 -- # local var val 00:04:48.138 05:45:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.138 05:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.138 05:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.138 05:45:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.138 05:45:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.138 05:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.138 05:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6974284 kB' 'MemAvailable: 9475364 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456672 kB' 'Inactive: 2370088 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180532 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100404 kB' 'KernelStack: 6688 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': 
' 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.138 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.138 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 
-- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.139 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.139 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.140 05:45:09 -- setup/common.sh@33 -- # echo 0 00:04:48.140 05:45:09 -- setup/common.sh@33 -- # return 0 00:04:48.140 05:45:09 -- setup/hugepages.sh@100 -- # resv=0 00:04:48.140 nr_hugepages=1024 00:04:48.140 05:45:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.140 resv_hugepages=0 00:04:48.140 05:45:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.140 surplus_hugepages=0 00:04:48.140 05:45:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.140 anon_hugepages=0 00:04:48.140 05:45:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.140 05:45:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.140 05:45:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.140 05:45:09 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:04:48.140 05:45:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.140 05:45:09 -- setup/common.sh@18 -- # local node= 00:04:48.140 05:45:09 -- setup/common.sh@19 -- # local var val 00:04:48.140 05:45:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.140 05:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.140 05:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.140 05:45:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.140 05:45:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.140 05:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6974284 kB' 'MemAvailable: 9475364 kB' 'Buffers: 2684 kB' 'Cached: 2705768 kB' 'SwapCached: 0 kB' 'Active: 456024 kB' 'Inactive: 2370088 kB' 'Active(anon): 128148 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119292 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180532 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100404 kB' 'KernelStack: 6772 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.140 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.140 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.141 05:45:09 -- setup/common.sh@33 -- # echo 1024 00:04:48.141 05:45:09 -- setup/common.sh@33 -- # return 0 00:04:48.141 05:45:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.141 05:45:09 -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.141 05:45:09 -- setup/hugepages.sh@27 -- # local node 00:04:48.141 05:45:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.141 05:45:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.141 05:45:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.141 05:45:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.141 05:45:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.141 05:45:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.141 05:45:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.141 05:45:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.141 05:45:09 -- setup/common.sh@18 -- # local node=0 00:04:48.141 05:45:09 -- setup/common.sh@19 -- # local var val 00:04:48.141 05:45:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.141 05:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.141 05:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.141 05:45:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.141 05:45:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.141 05:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6974284 kB' 'MemUsed: 5264836 kB' 'SwapCached: 0 kB' 'Active: 456428 kB' 'Inactive: 2370088 kB' 'Active(anon): 128552 kB' 'Inactive(anon): 0 kB' 
'Active(file): 327876 kB' 'Inactive(file): 2370088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708456 kB' 'Mapped: 50664 kB' 'AnonPages: 119476 kB' 'Shmem: 10488 kB' 'KernelStack: 6756 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80128 kB' 'Slab: 180520 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 
-- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.141 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.141 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.142 05:45:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.142 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.142 05:45:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.142 05:45:09 -- setup/common.sh@33 -- # echo 0 00:04:48.142 05:45:09 -- setup/common.sh@33 -- # return 0 00:04:48.142 05:45:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.142 05:45:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.142 05:45:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.142 05:45:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.142 node0=1024 expecting 1024 00:04:48.142 05:45:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.142 05:45:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.142 05:45:09 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:48.142 05:45:09 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:48.142 05:45:09 -- setup/hugepages.sh@202 -- # setup output 00:04:48.142 05:45:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.142 05:45:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.421 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.421 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.421 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:48.421 05:45:09 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:48.421 05:45:09 -- setup/hugepages.sh@89 -- # local node 00:04:48.421 05:45:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.421 05:45:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.421 05:45:09 -- setup/hugepages.sh@92 -- # local surp 00:04:48.421 05:45:09 -- setup/hugepages.sh@93 -- # local resv 00:04:48.421 05:45:09 -- setup/hugepages.sh@94 -- # local anon 00:04:48.421 05:45:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.421 05:45:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.421 05:45:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.421 05:45:09 -- setup/common.sh@18 -- # local node= 00:04:48.421 05:45:09 -- setup/common.sh@19 -- # local var val 00:04:48.421 05:45:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.421 05:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.421 05:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.421 05:45:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.421 05:45:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.421 05:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.421 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.421 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6970700 kB' 'MemAvailable: 9471784 kB' 'Buffers: 2684 kB' 'Cached: 2705772 kB' 'SwapCached: 0 kB' 'Active: 456868 kB' 'Inactive: 2370092 kB' 'Active(anon): 128992 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120088 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 
180536 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100408 kB' 'KernelStack: 6824 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.422 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.422 05:45:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.422 05:45:09 -- setup/common.sh@33 -- # echo 0 00:04:48.422 05:45:09 -- setup/common.sh@33 -- # return 0 00:04:48.423 05:45:09 -- setup/hugepages.sh@97 -- # anon=0 00:04:48.423 05:45:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.423 05:45:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.423 05:45:09 -- setup/common.sh@18 -- # local node= 00:04:48.423 05:45:09 -- setup/common.sh@19 -- # local var val 00:04:48.423 05:45:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.423 05:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.423 05:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.423 05:45:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.423 05:45:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.423 05:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6970700 kB' 'MemAvailable: 9471784 kB' 'Buffers: 2684 kB' 'Cached: 2705772 kB' 'SwapCached: 0 kB' 'Active: 456464 kB' 'Inactive: 2370092 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180520 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100392 kB' 'KernelStack: 6696 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 
kB' 'DirectMap1G: 9437184 kB' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:09 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 
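The IFS=': ' / read -r var val _ / continue entries traced above are get_meminfo walking /proc/meminfo one field at a time until it reaches the requested key (AnonHugePages on the first pass, HugePages_Surp here), echoing its value and returning. A minimal sketch of that lookup, assuming a simplified stand-in (get_meminfo_sketch is a hypothetical name, not the repo's setup/common.sh):

    # Sketch only: a hypothetical, simplified stand-in for setup/common.sh's get_meminfo.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # Per-node lookups read the node-local meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node "$node" }        # node-local lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip every field except the requested one
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

In this run, get_meminfo_sketch HugePages_Surp would print 0, matching the echo 0 / return 0 pair that closes each scan in the trace.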
00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.423 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.423 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 
05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.424 05:45:10 -- setup/common.sh@33 -- # echo 0 00:04:48.424 05:45:10 -- setup/common.sh@33 -- # return 0 00:04:48.424 05:45:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:48.424 05:45:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.424 05:45:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.424 05:45:10 -- setup/common.sh@18 -- # local node= 00:04:48.424 05:45:10 -- setup/common.sh@19 -- # local var val 00:04:48.424 05:45:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.424 05:45:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.424 05:45:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.424 05:45:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.424 05:45:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.424 05:45:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6970700 kB' 'MemAvailable: 9471784 kB' 'Buffers: 2684 kB' 'Cached: 2705772 kB' 'SwapCached: 0 kB' 'Active: 456444 kB' 'Inactive: 2370092 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119652 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180520 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100392 kB' 'KernelStack: 6720 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 
05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.424 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.424 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # 
continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.425 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.425 05:45:10 -- setup/common.sh@33 -- # echo 0 00:04:48.425 05:45:10 -- setup/common.sh@33 -- # return 0 00:04:48.425 05:45:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:48.425 nr_hugepages=1024 00:04:48.425 05:45:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.425 resv_hugepages=0 00:04:48.425 05:45:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.425 surplus_hugepages=0 00:04:48.425 05:45:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.425 anon_hugepages=0 00:04:48.425 05:45:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.425 05:45:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.425 05:45:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.425 05:45:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.425 05:45:10 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:48.425 05:45:10 -- setup/common.sh@18 -- # local node= 00:04:48.425 05:45:10 -- setup/common.sh@19 -- # local var val 00:04:48.425 05:45:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.425 05:45:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.425 05:45:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.425 05:45:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.425 05:45:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.425 05:45:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.425 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.426 05:45:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6970700 kB' 'MemAvailable: 9471784 kB' 'Buffers: 2684 kB' 'Cached: 2705772 kB' 'SwapCached: 0 kB' 'Active: 456444 kB' 'Inactive: 2370092 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119652 kB' 'Mapped: 50664 kB' 'Shmem: 10488 kB' 'KReclaimable: 80128 kB' 'Slab: 180520 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100392 kB' 'KernelStack: 6720 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:48.426 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- 
setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 
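The scan in progress here is the same field walk once more, now for HugePages_Total; when it reaches that field it echoes 1024, and verify_nr_hugepages checks the total against nr_hugepages plus the surplus and reserved counts it just collected. A rough sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper above (variable names mirror the trace but are illustrative):

    # Sketch only: simplified from the checks traced in setup/hugepages.sh.
    nr_hugepages=1024 surp=0 resv=0 anon=0
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"
    else
        echo "unexpected hugepage totals: $total" >&2
    fi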
00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.697 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 
05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.698 05:45:10 -- setup/common.sh@33 -- # echo 1024 00:04:48.698 05:45:10 -- setup/common.sh@33 -- # return 0 00:04:48.698 05:45:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.698 05:45:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.698 05:45:10 -- setup/hugepages.sh@27 -- # local node 00:04:48.698 05:45:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.698 05:45:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.698 05:45:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.698 05:45:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.698 05:45:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.698 05:45:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.698 05:45:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.698 05:45:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.698 05:45:10 -- setup/common.sh@18 -- # local node=0 00:04:48.698 05:45:10 -- setup/common.sh@19 -- # local var val 00:04:48.698 05:45:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.698 05:45:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.698 05:45:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.698 05:45:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.698 05:45:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.698 05:45:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6970700 kB' 'MemUsed: 5268420 kB' 'SwapCached: 0 kB' 'Active: 456356 kB' 'Inactive: 2370092 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 327876 kB' 'Inactive(file): 2370092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 2708456 kB' 'Mapped: 50664 kB' 'AnonPages: 119560 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80128 kB' 'Slab: 180520 kB' 'SReclaimable: 80128 kB' 'SUnreclaim: 100392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 
05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- 
# continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@32 -- # continue 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 05:45:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 05:45:10 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.699 05:45:10 -- setup/common.sh@33 -- # echo 0 00:04:48.699 05:45:10 -- setup/common.sh@33 -- # return 0 00:04:48.699 05:45:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.699 05:45:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.699 05:45:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.699 05:45:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.699 node0=1024 expecting 1024 00:04:48.699 05:45:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.699 05:45:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.699 00:04:48.699 real 0m1.035s 00:04:48.699 user 0m0.558s 00:04:48.699 sys 0m0.546s 00:04:48.699 05:45:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.699 05:45:10 -- common/autotest_common.sh@10 -- # set +x 00:04:48.699 ************************************ 00:04:48.699 END TEST no_shrink_alloc 00:04:48.699 ************************************ 00:04:48.699 05:45:10 -- setup/hugepages.sh@217 -- # clear_hp 00:04:48.699 05:45:10 -- setup/hugepages.sh@37 -- # local node hp 00:04:48.699 05:45:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:48.699 05:45:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.699 05:45:10 -- setup/hugepages.sh@41 -- # echo 0 00:04:48.699 05:45:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.699 05:45:10 -- setup/hugepages.sh@41 -- # echo 0 00:04:48.699 05:45:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:48.699 05:45:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:48.699 00:04:48.699 real 0m4.737s 00:04:48.699 user 0m2.299s 00:04:48.699 sys 0m2.486s 00:04:48.699 05:45:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.699 05:45:10 -- common/autotest_common.sh@10 -- # set +x 00:04:48.699 ************************************ 00:04:48.699 END TEST hugepages 00:04:48.699 ************************************ 00:04:48.699 05:45:10 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:48.699 05:45:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.699 05:45:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.699 05:45:10 -- common/autotest_common.sh@10 -- # set +x 00:04:48.699 ************************************ 00:04:48.699 START TEST driver 00:04:48.699 ************************************ 00:04:48.699 05:45:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:48.699 * Looking for test storage... 
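The long field-by-field trace above is setup/common.sh walking every key of /proc/meminfo and /sys/devices/system/node/node0/meminfo until it reaches the one it was asked for (HugePages_Total, then HugePages_Surp for node 0) and echoing its value, which hugepages.sh then checks against the expected reservation ("node0=1024 expecting 1024"). As a rough, hypothetical stand-in for that lookup (the real helper lives in test/setup/common.sh; the names below are illustrative, not a verbatim copy), the same scan can be written as:

#!/usr/bin/env bash
shopt -s extglob
# Illustrative re-implementation of the meminfo lookup traced above: print the
# value of one field, optionally from a specific NUMA node's meminfo file.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node +([0-9]) }     # per-node files prefix every key with "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    return 1
}

get_meminfo HugePages_Total      # whole-system count; 1024 in this run
get_meminfo HugePages_Surp 0     # surplus pages on node0; 0 in this run

Every "continue" entry in the xtrace above is just this loop skipping a key that is not the one being looked up.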
00:04:48.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:48.699 05:45:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:48.699 05:45:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:48.699 05:45:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:48.958 05:45:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:48.958 05:45:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:48.958 05:45:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:48.959 05:45:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:48.959 05:45:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:48.959 05:45:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:48.959 05:45:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.959 05:45:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:48.959 05:45:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:48.959 05:45:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:48.959 05:45:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:48.959 05:45:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:48.959 05:45:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:48.959 05:45:10 -- scripts/common.sh@344 -- # : 1 00:04:48.959 05:45:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:48.959 05:45:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.959 05:45:10 -- scripts/common.sh@364 -- # decimal 1 00:04:48.959 05:45:10 -- scripts/common.sh@352 -- # local d=1 00:04:48.959 05:45:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.959 05:45:10 -- scripts/common.sh@354 -- # echo 1 00:04:48.959 05:45:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:48.959 05:45:10 -- scripts/common.sh@365 -- # decimal 2 00:04:48.959 05:45:10 -- scripts/common.sh@352 -- # local d=2 00:04:48.959 05:45:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.959 05:45:10 -- scripts/common.sh@354 -- # echo 2 00:04:48.959 05:45:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:48.959 05:45:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:48.959 05:45:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:48.959 05:45:10 -- scripts/common.sh@367 -- # return 0 00:04:48.959 05:45:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.959 05:45:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:48.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.959 --rc genhtml_branch_coverage=1 00:04:48.959 --rc genhtml_function_coverage=1 00:04:48.959 --rc genhtml_legend=1 00:04:48.959 --rc geninfo_all_blocks=1 00:04:48.959 --rc geninfo_unexecuted_blocks=1 00:04:48.959 00:04:48.959 ' 00:04:48.959 05:45:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:48.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.959 --rc genhtml_branch_coverage=1 00:04:48.959 --rc genhtml_function_coverage=1 00:04:48.959 --rc genhtml_legend=1 00:04:48.959 --rc geninfo_all_blocks=1 00:04:48.959 --rc geninfo_unexecuted_blocks=1 00:04:48.959 00:04:48.959 ' 00:04:48.959 05:45:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:48.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.959 --rc genhtml_branch_coverage=1 00:04:48.959 --rc genhtml_function_coverage=1 00:04:48.959 --rc genhtml_legend=1 00:04:48.959 --rc geninfo_all_blocks=1 00:04:48.959 --rc geninfo_unexecuted_blocks=1 00:04:48.959 00:04:48.959 ' 00:04:48.959 05:45:10 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:48.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.959 --rc genhtml_branch_coverage=1 00:04:48.959 --rc genhtml_function_coverage=1 00:04:48.959 --rc genhtml_legend=1 00:04:48.959 --rc geninfo_all_blocks=1 00:04:48.959 --rc geninfo_unexecuted_blocks=1 00:04:48.959 00:04:48.959 ' 00:04:48.959 05:45:10 -- setup/driver.sh@68 -- # setup reset 00:04:48.959 05:45:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.959 05:45:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.527 05:45:10 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:49.527 05:45:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.527 05:45:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.527 05:45:10 -- common/autotest_common.sh@10 -- # set +x 00:04:49.527 ************************************ 00:04:49.527 START TEST guess_driver 00:04:49.527 ************************************ 00:04:49.527 05:45:10 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:49.527 05:45:10 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:49.527 05:45:10 -- setup/driver.sh@47 -- # local fail=0 00:04:49.527 05:45:10 -- setup/driver.sh@49 -- # pick_driver 00:04:49.527 05:45:10 -- setup/driver.sh@36 -- # vfio 00:04:49.527 05:45:10 -- setup/driver.sh@21 -- # local iommu_grups 00:04:49.527 05:45:10 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:49.527 05:45:10 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:49.527 05:45:10 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:49.527 05:45:10 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:49.527 05:45:10 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:49.527 05:45:10 -- setup/driver.sh@32 -- # return 1 00:04:49.527 05:45:10 -- setup/driver.sh@38 -- # uio 00:04:49.527 05:45:10 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:49.527 05:45:10 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:49.527 05:45:10 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:49.527 05:45:10 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:49.527 05:45:10 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:49.527 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:49.527 05:45:10 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:49.527 05:45:10 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:49.527 05:45:10 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:49.527 Looking for driver=uio_pci_generic 00:04:49.527 05:45:10 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:49.527 05:45:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.527 05:45:10 -- setup/driver.sh@45 -- # setup output config 00:04:49.527 05:45:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.527 05:45:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.095 05:45:11 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:50.095 05:45:11 -- setup/driver.sh@58 -- # continue 00:04:50.095 05:45:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.095 05:45:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.095 05:45:11 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:50.095 05:45:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.095 05:45:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.095 05:45:11 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:50.095 05:45:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.353 05:45:11 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:50.353 05:45:11 -- setup/driver.sh@65 -- # setup reset 00:04:50.353 05:45:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.353 05:45:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.921 ************************************ 00:04:50.921 END TEST guess_driver 00:04:50.921 ************************************ 00:04:50.921 00:04:50.921 real 0m1.387s 00:04:50.921 user 0m0.548s 00:04:50.921 sys 0m0.847s 00:04:50.921 05:45:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.921 05:45:12 -- common/autotest_common.sh@10 -- # set +x 00:04:50.921 00:04:50.921 real 0m2.152s 00:04:50.921 user 0m0.872s 00:04:50.921 sys 0m1.355s 00:04:50.921 05:45:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.921 05:45:12 -- common/autotest_common.sh@10 -- # set +x 00:04:50.921 ************************************ 00:04:50.921 END TEST driver 00:04:50.921 ************************************ 00:04:50.921 05:45:12 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:50.921 05:45:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.921 05:45:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.921 05:45:12 -- common/autotest_common.sh@10 -- # set +x 00:04:50.921 ************************************ 00:04:50.921 START TEST devices 00:04:50.921 ************************************ 00:04:50.921 05:45:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:50.921 * Looking for test storage... 00:04:50.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:50.921 05:45:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:50.921 05:45:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:50.921 05:45:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:51.179 05:45:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:51.179 05:45:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:51.179 05:45:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:51.179 05:45:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:51.179 05:45:12 -- scripts/common.sh@335 -- # IFS=.-: 00:04:51.179 05:45:12 -- scripts/common.sh@335 -- # read -ra ver1 00:04:51.179 05:45:12 -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.180 05:45:12 -- scripts/common.sh@336 -- # read -ra ver2 00:04:51.180 05:45:12 -- scripts/common.sh@337 -- # local 'op=<' 00:04:51.180 05:45:12 -- scripts/common.sh@339 -- # ver1_l=2 00:04:51.180 05:45:12 -- scripts/common.sh@340 -- # ver2_l=1 00:04:51.180 05:45:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:51.180 05:45:12 -- scripts/common.sh@343 -- # case "$op" in 00:04:51.180 05:45:12 -- scripts/common.sh@344 -- # : 1 00:04:51.180 05:45:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:51.180 05:45:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.180 05:45:12 -- scripts/common.sh@364 -- # decimal 1 00:04:51.180 05:45:12 -- scripts/common.sh@352 -- # local d=1 00:04:51.180 05:45:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.180 05:45:12 -- scripts/common.sh@354 -- # echo 1 00:04:51.180 05:45:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:51.180 05:45:12 -- scripts/common.sh@365 -- # decimal 2 00:04:51.180 05:45:12 -- scripts/common.sh@352 -- # local d=2 00:04:51.180 05:45:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.180 05:45:12 -- scripts/common.sh@354 -- # echo 2 00:04:51.180 05:45:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:51.180 05:45:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:51.180 05:45:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:51.180 05:45:12 -- scripts/common.sh@367 -- # return 0 00:04:51.180 05:45:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.180 05:45:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:51.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.180 --rc genhtml_branch_coverage=1 00:04:51.180 --rc genhtml_function_coverage=1 00:04:51.180 --rc genhtml_legend=1 00:04:51.180 --rc geninfo_all_blocks=1 00:04:51.180 --rc geninfo_unexecuted_blocks=1 00:04:51.180 00:04:51.180 ' 00:04:51.180 05:45:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:51.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.180 --rc genhtml_branch_coverage=1 00:04:51.180 --rc genhtml_function_coverage=1 00:04:51.180 --rc genhtml_legend=1 00:04:51.180 --rc geninfo_all_blocks=1 00:04:51.180 --rc geninfo_unexecuted_blocks=1 00:04:51.180 00:04:51.180 ' 00:04:51.180 05:45:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:51.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.180 --rc genhtml_branch_coverage=1 00:04:51.180 --rc genhtml_function_coverage=1 00:04:51.180 --rc genhtml_legend=1 00:04:51.180 --rc geninfo_all_blocks=1 00:04:51.180 --rc geninfo_unexecuted_blocks=1 00:04:51.180 00:04:51.180 ' 00:04:51.180 05:45:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:51.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.180 --rc genhtml_branch_coverage=1 00:04:51.180 --rc genhtml_function_coverage=1 00:04:51.180 --rc genhtml_legend=1 00:04:51.180 --rc geninfo_all_blocks=1 00:04:51.180 --rc geninfo_unexecuted_blocks=1 00:04:51.180 00:04:51.180 ' 00:04:51.180 05:45:12 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:51.180 05:45:12 -- setup/devices.sh@192 -- # setup reset 00:04:51.180 05:45:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.180 05:45:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.748 05:45:13 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:51.748 05:45:13 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:51.748 05:45:13 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:51.748 05:45:13 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:51.748 05:45:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.748 05:45:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:51.748 05:45:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:51.748 05:45:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:51.748 05:45:13 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:51.748 05:45:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.748 05:45:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:51.748 05:45:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:51.748 05:45:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:51.748 05:45:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.748 05:45:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.748 05:45:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:51.748 05:45:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:51.748 05:45:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:51.748 05:45:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.748 05:45:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.748 05:45:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:51.748 05:45:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:51.748 05:45:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:51.748 05:45:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.748 05:45:13 -- setup/devices.sh@196 -- # blocks=() 00:04:51.748 05:45:13 -- setup/devices.sh@196 -- # declare -a blocks 00:04:51.748 05:45:13 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:51.748 05:45:13 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:51.748 05:45:13 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:51.748 05:45:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.748 05:45:13 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:51.748 05:45:13 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:51.748 05:45:13 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:51.748 05:45:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:51.748 05:45:13 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:51.748 05:45:13 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:51.748 05:45:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:51.748 No valid GPT data, bailing 00:04:51.748 05:45:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:51.748 05:45:13 -- scripts/common.sh@393 -- # pt= 00:04:51.748 05:45:13 -- scripts/common.sh@394 -- # return 1 00:04:51.748 05:45:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:51.748 05:45:13 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:51.748 05:45:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:51.748 05:45:13 -- setup/common.sh@80 -- # echo 5368709120 00:04:51.748 05:45:13 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:51.748 05:45:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.748 05:45:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:51.748 05:45:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.748 05:45:13 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:51.748 05:45:13 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.748 05:45:13 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:51.748 05:45:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:51.748 05:45:13 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
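At this point devices.sh is deciding whether each NVMe namespace is safe to claim for the mount tests: scripts/spdk-gpt.py reports "No valid GPT data, bailing" on the blank disk, and the blkid fallback returns an empty PTTYPE, so the device is treated as free. A simplified sketch of that probe, keeping only the blkid half and using a function name of my own (the real check is block_in_use in scripts/common.sh), could look like:

#!/usr/bin/env bash
# Sketch of the "is this disk free?" probe seen above. The log runs the repo's
# scripts/spdk-gpt.py first and then blkid; only the blkid fallback is kept
# here, and an empty partition-table type is what lets the test claim the disk.
disk_is_free() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "/dev/$block" || true)
    [[ -z $pt ]]      # no PTTYPE string -> no partition table on the disk
}

if disk_is_free nvme0n1; then
    echo "nvme0n1 carries no partition table, ok to use for the mount tests"
fi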
00:04:51.748 05:45:13 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:51.748 05:45:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:52.008 No valid GPT data, bailing 00:04:52.008 05:45:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:52.008 05:45:13 -- scripts/common.sh@393 -- # pt= 00:04:52.008 05:45:13 -- scripts/common.sh@394 -- # return 1 00:04:52.008 05:45:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:52.008 05:45:13 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:52.008 05:45:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:52.008 05:45:13 -- setup/common.sh@80 -- # echo 4294967296 00:04:52.008 05:45:13 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.008 05:45:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.008 05:45:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:52.008 05:45:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.008 05:45:13 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:52.008 05:45:13 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:52.008 05:45:13 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:52.008 05:45:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:52.008 05:45:13 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:52.008 05:45:13 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:52.008 05:45:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:52.008 No valid GPT data, bailing 00:04:52.008 05:45:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:52.008 05:45:13 -- scripts/common.sh@393 -- # pt= 00:04:52.008 05:45:13 -- scripts/common.sh@394 -- # return 1 00:04:52.008 05:45:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:52.008 05:45:13 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:52.008 05:45:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:52.008 05:45:13 -- setup/common.sh@80 -- # echo 4294967296 00:04:52.008 05:45:13 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.008 05:45:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.008 05:45:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:52.008 05:45:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.008 05:45:13 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:52.008 05:45:13 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:52.008 05:45:13 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:52.008 05:45:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:52.008 05:45:13 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:52.008 05:45:13 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:52.008 05:45:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:52.008 No valid GPT data, bailing 00:04:52.008 05:45:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:52.008 05:45:13 -- scripts/common.sh@393 -- # pt= 00:04:52.008 05:45:13 -- scripts/common.sh@394 -- # return 1 00:04:52.008 05:45:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:52.008 05:45:13 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:52.008 05:45:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:52.008 05:45:13 -- setup/common.sh@80 -- # echo 4294967296 
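Each surviving namespace is then sized and filtered: the trace shows 5368709120 bytes for nvme0n1 and 4294967296 bytes for the nvme1 namespaces, and only devices of at least min_disk_size=3221225472 (3 GiB) are added to the blocks array, keyed by their PCI address. The body of sec_size_to_bytes is not shown in the log, but the numbers are consistent with the usual sysfs arithmetic, sketched here as an assumption:

#!/usr/bin/env bash
# Assumed implementation: /sys/block/<dev>/size counts 512-byte sectors, so
# bytes = sectors * 512. This reproduces the 5368709120/4294967296 values above.
sec_size_to_bytes() {
    local dev=$1
    [[ -e /sys/block/$dev ]] || return 1
    echo $(( $(< "/sys/block/$dev/size") * 512 ))
}

min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in the trace
for dev in nvme0n1 nvme1n1 nvme1n2 nvme1n3; do
    size=$(sec_size_to_bytes "$dev") || continue
    (( size >= min_disk_size )) && echo "$dev: $size bytes, large enough to test"
done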
00:04:52.008 05:45:13 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.008 05:45:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.008 05:45:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:52.008 05:45:13 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:52.008 05:45:13 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:52.008 05:45:13 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:52.008 05:45:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.008 05:45:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.008 05:45:13 -- common/autotest_common.sh@10 -- # set +x 00:04:52.008 ************************************ 00:04:52.008 START TEST nvme_mount 00:04:52.008 ************************************ 00:04:52.008 05:45:13 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:52.008 05:45:13 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:52.008 05:45:13 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:52.008 05:45:13 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.008 05:45:13 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.008 05:45:13 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:52.008 05:45:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.008 05:45:13 -- setup/common.sh@40 -- # local part_no=1 00:04:52.008 05:45:13 -- setup/common.sh@41 -- # local size=1073741824 00:04:52.008 05:45:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.008 05:45:13 -- setup/common.sh@44 -- # parts=() 00:04:52.008 05:45:13 -- setup/common.sh@44 -- # local parts 00:04:52.008 05:45:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.008 05:45:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.008 05:45:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.008 05:45:13 -- setup/common.sh@46 -- # (( part++ )) 00:04:52.008 05:45:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.008 05:45:13 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:52.008 05:45:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.008 05:45:13 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:53.386 Creating new GPT entries in memory. 00:04:53.386 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.386 other utilities. 00:04:53.386 05:45:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.386 05:45:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.386 05:45:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.386 05:45:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.386 05:45:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:54.324 Creating new GPT entries in memory. 00:04:54.324 The operation has completed successfully. 
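The partition step that just ran boils down to: destroy any existing GPT/MBR with sgdisk --zap-all, then, under flock on the disk, create one new partition spanning sectors 2048-264191 (size 1073741824 / 4096 = 262144 units, as computed in the trace), while scripts/sync_dev_uevents.sh waits for the nvme0n1p1 uevent so the test does not race udev. The commands and numbers below are the ones visible in the log; backgrounding the uevent helper and the set -e are my framing, not a verbatim copy of partition_drive:

#!/usr/bin/env bash
set -e
disk=/dev/nvme0n1
size=$(( 1073741824 / 4096 ))          # 262144, the per-partition size used above
part_start=2048
part_end=$(( part_start + size - 1 ))  # 264191
# Listen for the new partition's uevent; assumed to exit once nvme0n1p1 shows up.
/home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 &
uevent_pid=$!
sgdisk "$disk" --zap-all                                    # "GPT data structures destroyed!"
flock "$disk" sgdisk "$disk" --new=1:$part_start:$part_end  # "The operation has completed successfully."
wait "$uevent_pid"
# /dev/nvme0n1p1 now exists and is ready for mkfs.ext4 -qF, as the next lines show.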
00:04:54.324 05:45:15 -- setup/common.sh@57 -- # (( part++ )) 00:04:54.324 05:45:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.324 05:45:15 -- setup/common.sh@62 -- # wait 63861 00:04:54.324 05:45:15 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.324 05:45:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:54.324 05:45:15 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.324 05:45:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:54.324 05:45:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:54.324 05:45:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.324 05:45:15 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.324 05:45:15 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:54.324 05:45:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:54.324 05:45:15 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.324 05:45:15 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.324 05:45:15 -- setup/devices.sh@53 -- # local found=0 00:04:54.324 05:45:15 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.324 05:45:15 -- setup/devices.sh@56 -- # : 00:04:54.324 05:45:15 -- setup/devices.sh@59 -- # local pci status 00:04:54.324 05:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.324 05:45:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:54.324 05:45:15 -- setup/devices.sh@47 -- # setup output config 00:04:54.324 05:45:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.324 05:45:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.324 05:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.325 05:45:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:54.325 05:45:15 -- setup/devices.sh@63 -- # found=1 00:04:54.325 05:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.325 05:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.325 05:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.892 05:45:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.892 05:45:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.892 05:45:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.892 05:45:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.892 05:45:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.892 05:45:16 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:54.892 05:45:16 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.892 05:45:16 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.892 05:45:16 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.892 05:45:16 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:54.892 05:45:16 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.892 05:45:16 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.892 05:45:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.892 05:45:16 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:54.892 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.892 05:45:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.892 05:45:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.151 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.151 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.151 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.151 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.151 05:45:16 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:55.151 05:45:16 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:55.151 05:45:16 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.151 05:45:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.151 05:45:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.151 05:45:16 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.151 05:45:16 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.151 05:45:16 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:55.151 05:45:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:55.151 05:45:16 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.151 05:45:16 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.151 05:45:16 -- setup/devices.sh@53 -- # local found=0 00:04:55.151 05:45:16 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.151 05:45:16 -- setup/devices.sh@56 -- # : 00:04:55.151 05:45:16 -- setup/devices.sh@59 -- # local pci status 00:04:55.151 05:45:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.151 05:45:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:55.151 05:45:16 -- setup/devices.sh@47 -- # setup output config 00:04:55.151 05:45:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.151 05:45:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.409 05:45:16 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.409 05:45:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:55.409 05:45:16 -- setup/devices.sh@63 -- # found=1 00:04:55.409 05:45:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.410 05:45:16 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.410 
05:45:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.668 05:45:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.668 05:45:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.668 05:45:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.668 05:45:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.927 05:45:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.927 05:45:17 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:55.927 05:45:17 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.927 05:45:17 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.927 05:45:17 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.927 05:45:17 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.927 05:45:17 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:55.927 05:45:17 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:55.927 05:45:17 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:55.928 05:45:17 -- setup/devices.sh@50 -- # local mount_point= 00:04:55.928 05:45:17 -- setup/devices.sh@51 -- # local test_file= 00:04:55.928 05:45:17 -- setup/devices.sh@53 -- # local found=0 00:04:55.928 05:45:17 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:55.928 05:45:17 -- setup/devices.sh@59 -- # local pci status 00:04:55.928 05:45:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.928 05:45:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:55.928 05:45:17 -- setup/devices.sh@47 -- # setup output config 00:04:55.928 05:45:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.928 05:45:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.187 05:45:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:56.187 05:45:17 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:56.187 05:45:17 -- setup/devices.sh@63 -- # found=1 00:04:56.187 05:45:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.187 05:45:17 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:56.187 05:45:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.446 05:45:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:56.446 05:45:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.446 05:45:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:56.446 05:45:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.446 05:45:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.446 05:45:18 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.446 05:45:18 -- setup/devices.sh@68 -- # return 0 00:04:56.446 05:45:18 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:56.446 05:45:18 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.446 05:45:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.446 05:45:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.446 05:45:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.446 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:56.446 00:04:56.446 real 0m4.455s 00:04:56.446 user 0m0.993s 00:04:56.446 sys 0m1.148s 00:04:56.446 05:45:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.446 ************************************ 00:04:56.446 END TEST nvme_mount 00:04:56.446 ************************************ 00:04:56.446 05:45:18 -- common/autotest_common.sh@10 -- # set +x 00:04:56.705 05:45:18 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:56.705 05:45:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.705 05:45:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.705 05:45:18 -- common/autotest_common.sh@10 -- # set +x 00:04:56.705 ************************************ 00:04:56.705 START TEST dm_mount 00:04:56.705 ************************************ 00:04:56.705 05:45:18 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:56.705 05:45:18 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:56.705 05:45:18 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:56.705 05:45:18 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:56.705 05:45:18 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:56.705 05:45:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:56.705 05:45:18 -- setup/common.sh@40 -- # local part_no=2 00:04:56.705 05:45:18 -- setup/common.sh@41 -- # local size=1073741824 00:04:56.705 05:45:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.705 05:45:18 -- setup/common.sh@44 -- # parts=() 00:04:56.705 05:45:18 -- setup/common.sh@44 -- # local parts 00:04:56.705 05:45:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.705 05:45:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.705 05:45:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.705 05:45:18 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.705 05:45:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.705 05:45:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.705 05:45:18 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.705 05:45:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.705 05:45:18 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:56.705 05:45:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:56.705 05:45:18 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:57.641 Creating new GPT entries in memory. 00:04:57.641 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.641 other utilities. 00:04:57.641 05:45:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.641 05:45:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.641 05:45:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.641 05:45:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.641 05:45:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:58.577 Creating new GPT entries in memory. 00:04:58.577 The operation has completed successfully. 00:04:58.577 05:45:20 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.577 05:45:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.577 05:45:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:58.577 05:45:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.577 05:45:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:59.953 The operation has completed successfully. 00:04:59.953 05:45:21 -- setup/common.sh@57 -- # (( part++ )) 00:04:59.953 05:45:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.953 05:45:21 -- setup/common.sh@62 -- # wait 64320 00:04:59.953 05:45:21 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:59.953 05:45:21 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.953 05:45:21 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.953 05:45:21 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:59.953 05:45:21 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:59.953 05:45:21 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.953 05:45:21 -- setup/devices.sh@161 -- # break 00:04:59.953 05:45:21 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.953 05:45:21 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:59.953 05:45:21 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:59.953 05:45:21 -- setup/devices.sh@166 -- # dm=dm-0 00:04:59.953 05:45:21 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:59.953 05:45:21 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:59.953 05:45:21 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.953 05:45:21 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:59.953 05:45:21 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.953 05:45:21 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.953 05:45:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:59.953 05:45:21 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.953 05:45:21 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.953 05:45:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:59.953 05:45:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:59.953 05:45:21 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.953 05:45:21 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.953 05:45:21 -- setup/devices.sh@53 -- # local found=0 00:04:59.953 05:45:21 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.953 05:45:21 -- setup/devices.sh@56 -- # : 00:04:59.953 05:45:21 -- setup/devices.sh@59 -- # local pci status 00:04:59.953 05:45:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.953 05:45:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.953 05:45:21 -- setup/devices.sh@47 -- # setup output config 00:04:59.954 05:45:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.954 05:45:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.954 05:45:21 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.954 05:45:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:59.954 05:45:21 -- setup/devices.sh@63 -- # found=1 00:04:59.954 05:45:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.954 05:45:21 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.954 05:45:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.212 05:45:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.212 05:45:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.471 05:45:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.471 05:45:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.471 05:45:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.471 05:45:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:00.471 05:45:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.471 05:45:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.471 05:45:21 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.471 05:45:21 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.471 05:45:21 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:00.471 05:45:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:00.471 05:45:21 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:00.471 05:45:21 -- setup/devices.sh@50 -- # local mount_point= 00:05:00.471 05:45:21 -- setup/devices.sh@51 -- # local test_file= 00:05:00.471 05:45:21 -- setup/devices.sh@53 -- # local found=0 00:05:00.471 05:45:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.471 05:45:21 -- setup/devices.sh@59 -- # local pci status 00:05:00.471 05:45:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.471 05:45:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:00.471 05:45:21 -- setup/devices.sh@47 -- # setup output config 00:05:00.471 05:45:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.471 05:45:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.730 05:45:22 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.730 05:45:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:00.730 05:45:22 -- setup/devices.sh@63 -- # found=1 00:05:00.730 05:45:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.730 05:45:22 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.730 05:45:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.989 05:45:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.989 05:45:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.989 05:45:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.989 05:45:22 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.989 05:45:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.989 05:45:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.989 05:45:22 -- setup/devices.sh@68 -- # return 0 00:05:00.989 05:45:22 -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.989 05:45:22 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.248 05:45:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.248 05:45:22 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.248 05:45:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.248 05:45:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:01.248 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.248 05:45:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.248 05:45:22 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:01.248 00:05:01.248 real 0m4.561s 00:05:01.248 user 0m0.702s 00:05:01.248 sys 0m0.783s 00:05:01.248 05:45:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.248 ************************************ 00:05:01.248 END TEST dm_mount 00:05:01.248 05:45:22 -- common/autotest_common.sh@10 -- # set +x 00:05:01.248 ************************************ 00:05:01.248 05:45:22 -- setup/devices.sh@1 -- # cleanup 00:05:01.248 05:45:22 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.248 05:45:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.248 05:45:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.248 05:45:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.248 05:45:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.248 05:45:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.507 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.507 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.507 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.507 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.507 05:45:23 -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.507 05:45:23 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.507 05:45:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.507 05:45:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.507 05:45:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.507 05:45:23 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.507 05:45:23 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.507 00:05:01.507 real 0m10.620s 00:05:01.507 user 0m2.435s 00:05:01.507 sys 0m2.510s 00:05:01.507 ************************************ 00:05:01.507 END TEST devices 00:05:01.507 ************************************ 00:05:01.507 05:45:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.507 05:45:23 -- common/autotest_common.sh@10 -- # set +x 00:05:01.507 00:05:01.507 real 0m22.149s 00:05:01.507 user 0m7.712s 00:05:01.507 sys 0m8.818s 00:05:01.507 05:45:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.507 ************************************ 00:05:01.507 END TEST setup.sh 00:05:01.507 ************************************ 00:05:01.507 05:45:23 -- common/autotest_common.sh@10 -- # set +x 00:05:01.507 05:45:23 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.766 Hugepages 00:05:01.766 node hugesize free / total 00:05:01.766 node0 1048576kB 0 / 0 00:05:01.766 node0 2048kB 2048 / 2048 00:05:01.766 00:05:01.766 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.766 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:01.766 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.025 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:02.025 05:45:23 -- spdk/autotest.sh@128 -- # uname -s 00:05:02.025 05:45:23 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:02.025 05:45:23 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:02.025 05:45:23 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.592 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.851 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.851 05:45:24 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:03.786 05:45:25 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:03.786 05:45:25 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:03.786 05:45:25 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.786 05:45:25 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:03.786 05:45:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:03.786 05:45:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:03.786 05:45:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.786 05:45:25 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.786 05:45:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:03.786 05:45:25 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:03.786 05:45:25 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:03.787 05:45:25 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.303 Waiting for block devices as requested 00:05:04.303 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.303 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.303 05:45:25 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.303 05:45:25 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:04.303 05:45:25 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.303 05:45:25 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:04.303 05:45:25 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:04.303 05:45:25 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:04.303 05:45:25 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:04.303 05:45:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:04.303 05:45:25 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:04.303 05:45:25 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:04.303 05:45:25 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:04.303 05:45:25 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.303 05:45:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.303 05:45:25 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:04.303 05:45:25 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.303 05:45:25 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.303 05:45:25 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:04.303 05:45:25 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.303 05:45:25 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:04.562 05:45:25 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.562 05:45:25 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.562 05:45:25 -- common/autotest_common.sh@1552 -- # continue 00:05:04.562 05:45:25 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.562 05:45:25 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:04.562 05:45:25 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:04.562 05:45:25 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.562 05:45:25 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:04.562 05:45:25 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:04.562 05:45:25 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:04.562 05:45:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:04.562 05:45:25 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:04.562 05:45:25 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:04.562 05:45:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:04.563 05:45:25 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.563 05:45:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.563 05:45:25 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:04.563 05:45:25 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.563 05:45:25 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.563 05:45:25 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.563 05:45:25 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:04.563 05:45:25 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:04.563 05:45:25 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.563 05:45:25 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.563 05:45:25 -- common/autotest_common.sh@1552 -- # continue 00:05:04.563 05:45:25 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:04.563 05:45:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.563 05:45:25 -- common/autotest_common.sh@10 -- # set +x 00:05:04.563 05:45:26 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:04.563 05:45:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.563 05:45:26 -- common/autotest_common.sh@10 -- # set +x 00:05:04.563 05:45:26 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.130 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.389 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:05.389 05:45:26 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:05.389 05:45:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.389 05:45:26 -- common/autotest_common.sh@10 -- # set +x 00:05:05.389 05:45:26 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:05.389 05:45:26 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:05.389 05:45:26 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.389 05:45:26 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:05.389 05:45:26 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:05.389 05:45:26 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:05.389 05:45:26 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:05.389 05:45:26 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:05.389 05:45:26 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.389 05:45:26 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.389 05:45:26 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:05.389 05:45:26 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:05.389 05:45:26 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:05.389 05:45:26 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.389 05:45:26 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:05.389 05:45:26 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.389 05:45:26 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.389 05:45:26 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.389 05:45:26 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:05.389 05:45:26 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.389 05:45:26 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.389 05:45:26 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:05.389 05:45:26 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:05.389 05:45:26 -- common/autotest_common.sh@1588 -- # return 0 00:05:05.389 05:45:26 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:05.389 05:45:26 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:05.390 05:45:26 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.390 05:45:26 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.390 05:45:26 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:05.390 05:45:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.390 05:45:26 -- common/autotest_common.sh@10 -- # set +x 00:05:05.390 05:45:26 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.390 05:45:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.390 05:45:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.390 05:45:26 -- common/autotest_common.sh@10 -- # set +x 00:05:05.390 ************************************ 00:05:05.390 START TEST env 00:05:05.390 ************************************ 00:05:05.390 05:45:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.649 * Looking for test storage... 
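A note on the nvme_namespace_revert pre-cleanup pass above: for each controller the script reads the OACS word from nvme id-ctrl and only proceeds with a namespace revert when the Namespace Management capability is present and there is unallocated capacity; here both controllers report unvmcap 0, so both take the continue path. A minimal stand-alone sketch of that probe, with the device path as an illustrative example and the 0x8 mask assumed from the observed value 0x12a (OACS bit 3, Namespace Management):

  ctrlr=/dev/nvme0
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)            # e.g. 0x12a above
  ns_manage=$(( oacs & 0x8 ))                                        # assumed mask for the NS Management bit
  if (( ns_manage != 0 )); then
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "no unallocated capacity on $ctrlr, nothing to revert"
  fi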
00:05:05.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.649 05:45:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.649 05:45:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.649 05:45:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.649 05:45:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.649 05:45:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.649 05:45:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.649 05:45:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.649 05:45:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.649 05:45:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.649 05:45:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.649 05:45:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.649 05:45:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.649 05:45:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.649 05:45:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.649 05:45:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.649 05:45:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.649 05:45:27 -- scripts/common.sh@344 -- # : 1 00:05:05.649 05:45:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.649 05:45:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.649 05:45:27 -- scripts/common.sh@364 -- # decimal 1 00:05:05.649 05:45:27 -- scripts/common.sh@352 -- # local d=1 00:05:05.649 05:45:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.649 05:45:27 -- scripts/common.sh@354 -- # echo 1 00:05:05.649 05:45:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.649 05:45:27 -- scripts/common.sh@365 -- # decimal 2 00:05:05.649 05:45:27 -- scripts/common.sh@352 -- # local d=2 00:05:05.649 05:45:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.649 05:45:27 -- scripts/common.sh@354 -- # echo 2 00:05:05.649 05:45:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.649 05:45:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.649 05:45:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.649 05:45:27 -- scripts/common.sh@367 -- # return 0 00:05:05.649 05:45:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.649 05:45:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.649 --rc genhtml_branch_coverage=1 00:05:05.649 --rc genhtml_function_coverage=1 00:05:05.649 --rc genhtml_legend=1 00:05:05.649 --rc geninfo_all_blocks=1 00:05:05.649 --rc geninfo_unexecuted_blocks=1 00:05:05.649 00:05:05.649 ' 00:05:05.649 05:45:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.649 --rc genhtml_branch_coverage=1 00:05:05.649 --rc genhtml_function_coverage=1 00:05:05.649 --rc genhtml_legend=1 00:05:05.649 --rc geninfo_all_blocks=1 00:05:05.649 --rc geninfo_unexecuted_blocks=1 00:05:05.649 00:05:05.649 ' 00:05:05.649 05:45:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.649 --rc genhtml_branch_coverage=1 00:05:05.649 --rc genhtml_function_coverage=1 00:05:05.649 --rc genhtml_legend=1 00:05:05.649 --rc geninfo_all_blocks=1 00:05:05.649 --rc geninfo_unexecuted_blocks=1 00:05:05.649 00:05:05.649 ' 00:05:05.649 05:45:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.649 --rc genhtml_branch_coverage=1 00:05:05.649 --rc genhtml_function_coverage=1 00:05:05.649 --rc genhtml_legend=1 00:05:05.649 --rc geninfo_all_blocks=1 00:05:05.649 --rc geninfo_unexecuted_blocks=1 00:05:05.649 00:05:05.649 ' 00:05:05.649 05:45:27 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.649 05:45:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.649 05:45:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.649 05:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:05.649 ************************************ 00:05:05.649 START TEST env_memory 00:05:05.649 ************************************ 00:05:05.649 05:45:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.649 00:05:05.649 00:05:05.649 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.649 http://cunit.sourceforge.net/ 00:05:05.649 00:05:05.649 00:05:05.649 Suite: memory 00:05:05.649 Test: alloc and free memory map ...[2024-12-15 05:45:27.229113] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.649 passed 00:05:05.649 Test: mem map translation ...[2024-12-15 05:45:27.260063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.649 [2024-12-15 05:45:27.260288] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.649 [2024-12-15 05:45:27.260475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.649 [2024-12-15 05:45:27.260616] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.908 passed 00:05:05.908 Test: mem map registration ...[2024-12-15 05:45:27.324626] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:05.908 [2024-12-15 05:45:27.324665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:05.908 passed 00:05:05.908 Test: mem map adjacent registrations ...passed 00:05:05.908 00:05:05.908 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.908 suites 1 1 n/a 0 0 00:05:05.908 tests 4 4 4 0 0 00:05:05.908 asserts 152 152 152 0 n/a 00:05:05.908 00:05:05.908 Elapsed time = 0.212 seconds 00:05:05.908 00:05:05.908 real 0m0.231s 00:05:05.908 user 0m0.211s 00:05:05.908 sys 0m0.014s 00:05:05.908 05:45:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.908 ************************************ 00:05:05.908 END TEST env_memory 00:05:05.908 ************************************ 00:05:05.908 05:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:05.908 05:45:27 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.908 05:45:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.908 05:45:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.908 05:45:27 -- 
common/autotest_common.sh@10 -- # set +x 00:05:05.908 ************************************ 00:05:05.908 START TEST env_vtophys 00:05:05.908 ************************************ 00:05:05.908 05:45:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.908 EAL: lib.eal log level changed from notice to debug 00:05:05.908 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 1 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 2 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 3 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 4 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 5 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 6 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 7 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 8 as core 0 on socket 0 00:05:05.908 EAL: Detected lcore 9 as core 0 on socket 0 00:05:05.908 EAL: Maximum logical cores by configuration: 128 00:05:05.908 EAL: Detected CPU lcores: 10 00:05:05.908 EAL: Detected NUMA nodes: 1 00:05:05.908 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:05.908 EAL: Detected shared linkage of DPDK 00:05:05.908 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:05.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:05.909 EAL: Registered [vdev] bus. 00:05:05.909 EAL: bus.vdev log level changed from disabled to notice 00:05:05.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:05.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:05.909 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:05.909 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:05.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:05.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:05.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:05.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:05.909 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.909 EAL: No shared files mode enabled, IPC is disabled 00:05:05.909 EAL: Selected IOVA mode 'PA' 00:05:05.909 EAL: Probing VFIO support... 00:05:05.909 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.909 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:05.909 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.909 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.909 EAL: Setting up physically contiguous memory... 
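The VFIO probe just above fails because no vfio module is loaded, so the devices stay bound to uio_pci_generic and EAL selects IOVA mode 'PA'. On a host with the IOMMU enabled, the usual way to get vfio-pci (and with it IOVA 'VA') is to load the module before rebinding; a rough sketch reusing the PCI_ALLOWED convention seen earlier in this run, not a step the test itself performs:

  sudo modprobe vfio-pci
  sudo PCI_ALLOWED="0000:00:06.0 0000:00:07.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config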
00:05:05.909 EAL: Setting maximum number of open files to 524288 00:05:05.909 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.909 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.909 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.909 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.909 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.909 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.909 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.909 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.909 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.909 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.909 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.909 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.909 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.909 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.909 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.909 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.909 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.909 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.909 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.909 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.909 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.909 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.909 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.909 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.909 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.909 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.909 EAL: Hugepages will be freed exactly as allocated. 00:05:05.909 EAL: No shared files mode enabled, IPC is disabled 00:05:05.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: TSC frequency is ~2200000 KHz 00:05:06.168 EAL: Main lcore 0 is ready (tid=7f2ad746aa00;cpuset=[0]) 00:05:06.168 EAL: Trying to obtain current memory policy. 00:05:06.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.168 EAL: Restoring previous memory policy: 0 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.168 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.168 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.168 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:06.168 00:05:06.168 00:05:06.168 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.168 http://cunit.sourceforge.net/ 00:05:06.168 00:05:06.168 00:05:06.168 Suite: components_suite 00:05:06.168 Test: vtophys_malloc_test ...passed 00:05:06.168 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
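The four 0x400000000-byte reservations above follow directly from the memseg-list geometry: each list covers n_segs:8192 segments of hugepage_sz:2097152 bytes, and 8192 * 2 MiB = 16 GiB = 0x400000000, so roughly 64 GiB of virtual address space is reserved up front while physical hugepages are only consumed as the heap actually grows. Quick arithmetic check:

  printf '0x%x = %d GiB\n' $(( 8192 * 2097152 )) $(( 8192 * 2097152 / 1024 / 1024 / 1024 ))
  # 0x400000000 = 16 GiB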
00:05:06.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.168 EAL: Restoring previous memory policy: 4 00:05:06.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.168 EAL: Trying to obtain current memory policy. 00:05:06.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.168 EAL: Restoring previous memory policy: 4 00:05:06.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.168 EAL: Trying to obtain current memory policy. 00:05:06.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.168 EAL: Restoring previous memory policy: 4 00:05:06.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.168 EAL: Trying to obtain current memory policy. 00:05:06.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.168 EAL: Restoring previous memory policy: 4 00:05:06.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.168 EAL: request: mp_malloc_sync 00:05:06.168 EAL: No shared files mode enabled, IPC is disabled 00:05:06.168 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.169 EAL: Trying to obtain current memory policy. 00:05:06.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.169 EAL: Restoring previous memory policy: 4 00:05:06.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.169 EAL: request: mp_malloc_sync 00:05:06.169 EAL: No shared files mode enabled, IPC is disabled 00:05:06.169 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.169 EAL: request: mp_malloc_sync 00:05:06.169 EAL: No shared files mode enabled, IPC is disabled 00:05:06.169 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.169 EAL: Trying to obtain current memory policy. 
00:05:06.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.169 EAL: Restoring previous memory policy: 4 00:05:06.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.169 EAL: request: mp_malloc_sync 00:05:06.169 EAL: No shared files mode enabled, IPC is disabled 00:05:06.169 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.169 EAL: request: mp_malloc_sync 00:05:06.169 EAL: No shared files mode enabled, IPC is disabled 00:05:06.169 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.169 EAL: Trying to obtain current memory policy. 00:05:06.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.169 EAL: Restoring previous memory policy: 4 00:05:06.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.169 EAL: request: mp_malloc_sync 00:05:06.169 EAL: No shared files mode enabled, IPC is disabled 00:05:06.169 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.169 EAL: request: mp_malloc_sync 00:05:06.169 EAL: No shared files mode enabled, IPC is disabled 00:05:06.169 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.169 EAL: Trying to obtain current memory policy. 00:05:06.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.169 EAL: Restoring previous memory policy: 4 00:05:06.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.169 EAL: request: mp_malloc_sync 00:05:06.169 EAL: No shared files mode enabled, IPC is disabled 00:05:06.169 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.169 EAL: request: mp_malloc_sync 00:05:06.169 EAL: No shared files mode enabled, IPC is disabled 00:05:06.169 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.169 EAL: Trying to obtain current memory policy. 00:05:06.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.428 EAL: Restoring previous memory policy: 4 00:05:06.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.428 EAL: request: mp_malloc_sync 00:05:06.428 EAL: No shared files mode enabled, IPC is disabled 00:05:06.428 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.428 EAL: request: mp_malloc_sync 00:05:06.428 EAL: No shared files mode enabled, IPC is disabled 00:05:06.428 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.428 EAL: Trying to obtain current memory policy. 
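The expansion sizes in this suite (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, with a final 1026 MB step below) all match 2^k + 2 MB, i.e. the test appears to double its allocation request each round on top of the 2 MB already held. The sequence can be reproduced with:

  for (( n = 2; n <= 1024; n *= 2 )); do printf '%dMB ' $(( n + 2 )); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB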
00:05:06.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.688 EAL: Restoring previous memory policy: 4 00:05:06.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.688 EAL: request: mp_malloc_sync 00:05:06.688 EAL: No shared files mode enabled, IPC is disabled 00:05:06.688 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.947 passed 00:05:06.947 00:05:06.947 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.947 suites 1 1 n/a 0 0 00:05:06.947 tests 2 2 2 0 0 00:05:06.947 asserts 5358 5358 5358 0 n/a 00:05:06.947 00:05:06.947 Elapsed time = 0.683 seconds 00:05:06.947 EAL: request: mp_malloc_sync 00:05:06.947 EAL: No shared files mode enabled, IPC is disabled 00:05:06.947 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.947 EAL: request: mp_malloc_sync 00:05:06.947 EAL: No shared files mode enabled, IPC is disabled 00:05:06.947 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.947 EAL: No shared files mode enabled, IPC is disabled 00:05:06.947 EAL: No shared files mode enabled, IPC is disabled 00:05:06.947 EAL: No shared files mode enabled, IPC is disabled 00:05:06.947 ************************************ 00:05:06.947 END TEST env_vtophys 00:05:06.947 ************************************ 00:05:06.947 00:05:06.947 real 0m0.877s 00:05:06.947 user 0m0.440s 00:05:06.947 sys 0m0.303s 00:05:06.947 05:45:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.947 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:06.947 05:45:28 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:06.947 05:45:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.947 05:45:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.947 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:06.947 ************************************ 00:05:06.947 START TEST env_pci 00:05:06.947 ************************************ 00:05:06.947 05:45:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:06.947 00:05:06.947 00:05:06.947 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.947 http://cunit.sourceforge.net/ 00:05:06.947 00:05:06.947 00:05:06.947 Suite: pci 00:05:06.947 Test: pci_hook ...[2024-12-15 05:45:28.405259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65453 has claimed it 00:05:06.947 passed 00:05:06.947 00:05:06.947 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.947 suites 1 1 n/a 0 0 00:05:06.947 tests 1 1 1 0 0 00:05:06.947 asserts 25 25 25 0 n/a 00:05:06.947 00:05:06.947 Elapsed time = 0.002 seconds 00:05:06.947 EAL: Cannot find device (10000:00:01.0) 00:05:06.947 EAL: Failed to attach device on primary process 00:05:06.947 00:05:06.947 real 0m0.018s 00:05:06.947 user 0m0.006s 00:05:06.947 sys 0m0.011s 00:05:06.947 05:45:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.947 ************************************ 00:05:06.947 END TEST env_pci 00:05:06.947 ************************************ 00:05:06.947 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:06.947 05:45:28 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.947 05:45:28 -- env/env.sh@15 -- # uname 00:05:06.947 05:45:28 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:06.947 05:45:28 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:06.947 05:45:28 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.947 05:45:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:06.947 05:45:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.947 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:06.947 ************************************ 00:05:06.947 START TEST env_dpdk_post_init 00:05:06.947 ************************************ 00:05:06.947 05:45:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.947 EAL: Detected CPU lcores: 10 00:05:06.947 EAL: Detected NUMA nodes: 1 00:05:06.947 EAL: Detected shared linkage of DPDK 00:05:06.947 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.947 EAL: Selected IOVA mode 'PA' 00:05:07.208 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.208 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:07.208 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:07.208 Starting DPDK initialization... 00:05:07.208 Starting SPDK post initialization... 00:05:07.208 SPDK NVMe probe 00:05:07.208 Attaching to 0000:00:06.0 00:05:07.208 Attaching to 0000:00:07.0 00:05:07.208 Attached to 0000:00:06.0 00:05:07.208 Attached to 0000:00:07.0 00:05:07.208 Cleaning up... 00:05:07.208 ************************************ 00:05:07.208 END TEST env_dpdk_post_init 00:05:07.208 ************************************ 00:05:07.208 00:05:07.208 real 0m0.175s 00:05:07.208 user 0m0.043s 00:05:07.208 sys 0m0.031s 00:05:07.208 05:45:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.208 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.208 05:45:28 -- env/env.sh@26 -- # uname 00:05:07.208 05:45:28 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:07.208 05:45:28 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:07.208 05:45:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.208 05:45:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.208 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.208 ************************************ 00:05:07.208 START TEST env_mem_callbacks 00:05:07.208 ************************************ 00:05:07.208 05:45:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:07.208 EAL: Detected CPU lcores: 10 00:05:07.208 EAL: Detected NUMA nodes: 1 00:05:07.208 EAL: Detected shared linkage of DPDK 00:05:07.208 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.208 EAL: Selected IOVA mode 'PA' 00:05:07.208 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.208 00:05:07.208 00:05:07.208 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.208 http://cunit.sourceforge.net/ 00:05:07.208 00:05:07.208 00:05:07.208 Suite: memory 00:05:07.208 Test: test ... 
00:05:07.208 register 0x200000200000 2097152 00:05:07.208 malloc 3145728 00:05:07.208 register 0x200000400000 4194304 00:05:07.208 buf 0x200000500000 len 3145728 PASSED 00:05:07.208 malloc 64 00:05:07.208 buf 0x2000004fff40 len 64 PASSED 00:05:07.208 malloc 4194304 00:05:07.208 register 0x200000800000 6291456 00:05:07.208 buf 0x200000a00000 len 4194304 PASSED 00:05:07.208 free 0x200000500000 3145728 00:05:07.208 free 0x2000004fff40 64 00:05:07.208 unregister 0x200000400000 4194304 PASSED 00:05:07.208 free 0x200000a00000 4194304 00:05:07.208 unregister 0x200000800000 6291456 PASSED 00:05:07.208 malloc 8388608 00:05:07.208 register 0x200000400000 10485760 00:05:07.208 buf 0x200000600000 len 8388608 PASSED 00:05:07.208 free 0x200000600000 8388608 00:05:07.208 unregister 0x200000400000 10485760 PASSED 00:05:07.208 passed 00:05:07.208 00:05:07.208 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.208 suites 1 1 n/a 0 0 00:05:07.208 tests 1 1 1 0 0 00:05:07.208 asserts 15 15 15 0 n/a 00:05:07.208 00:05:07.208 Elapsed time = 0.008 seconds 00:05:07.208 00:05:07.208 real 0m0.139s 00:05:07.208 user 0m0.016s 00:05:07.208 sys 0m0.022s 00:05:07.208 05:45:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.208 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.208 ************************************ 00:05:07.208 END TEST env_mem_callbacks 00:05:07.208 ************************************ 00:05:07.486 00:05:07.486 real 0m1.894s 00:05:07.486 user 0m0.920s 00:05:07.486 sys 0m0.621s 00:05:07.486 05:45:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.486 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.486 ************************************ 00:05:07.486 END TEST env 00:05:07.486 ************************************ 00:05:07.486 05:45:28 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:07.486 05:45:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.486 05:45:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.486 05:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:07.486 ************************************ 00:05:07.486 START TEST rpc 00:05:07.486 ************************************ 00:05:07.486 05:45:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:07.486 * Looking for test storage... 
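All of the byte counts in the mem_callbacks trace above are whole MiB, which makes the sequence easier to read: the initial registration is 2 MiB, the 3 MiB malloc triggers a 4 MiB registration, the 4 MiB malloc a 6 MiB one, and the 8 MiB malloc a 10 MiB one, with matching unregister calls on free. A one-liner to convert them:

  for b in 2097152 3145728 4194304 6291456 8388608 10485760; do echo "$b B = $(( b / 1048576 )) MiB"; done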
00:05:07.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.486 05:45:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:07.486 05:45:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:07.486 05:45:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:07.486 05:45:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:07.486 05:45:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:07.486 05:45:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:07.486 05:45:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:07.486 05:45:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:07.486 05:45:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:07.486 05:45:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.486 05:45:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:07.486 05:45:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:07.486 05:45:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:07.486 05:45:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:07.486 05:45:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:07.486 05:45:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:07.486 05:45:29 -- scripts/common.sh@344 -- # : 1 00:05:07.486 05:45:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:07.486 05:45:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.486 05:45:29 -- scripts/common.sh@364 -- # decimal 1 00:05:07.486 05:45:29 -- scripts/common.sh@352 -- # local d=1 00:05:07.486 05:45:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.486 05:45:29 -- scripts/common.sh@354 -- # echo 1 00:05:07.486 05:45:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:07.486 05:45:29 -- scripts/common.sh@365 -- # decimal 2 00:05:07.486 05:45:29 -- scripts/common.sh@352 -- # local d=2 00:05:07.486 05:45:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.486 05:45:29 -- scripts/common.sh@354 -- # echo 2 00:05:07.486 05:45:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:07.486 05:45:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:07.486 05:45:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:07.486 05:45:29 -- scripts/common.sh@367 -- # return 0 00:05:07.486 05:45:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.486 05:45:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:07.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.486 --rc genhtml_branch_coverage=1 00:05:07.486 --rc genhtml_function_coverage=1 00:05:07.486 --rc genhtml_legend=1 00:05:07.486 --rc geninfo_all_blocks=1 00:05:07.486 --rc geninfo_unexecuted_blocks=1 00:05:07.486 00:05:07.486 ' 00:05:07.486 05:45:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:07.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.486 --rc genhtml_branch_coverage=1 00:05:07.486 --rc genhtml_function_coverage=1 00:05:07.486 --rc genhtml_legend=1 00:05:07.486 --rc geninfo_all_blocks=1 00:05:07.486 --rc geninfo_unexecuted_blocks=1 00:05:07.486 00:05:07.486 ' 00:05:07.486 05:45:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:07.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.486 --rc genhtml_branch_coverage=1 00:05:07.486 --rc genhtml_function_coverage=1 00:05:07.486 --rc genhtml_legend=1 00:05:07.486 --rc geninfo_all_blocks=1 00:05:07.486 --rc geninfo_unexecuted_blocks=1 00:05:07.486 00:05:07.486 ' 00:05:07.486 05:45:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:07.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.486 --rc genhtml_branch_coverage=1 00:05:07.486 --rc genhtml_function_coverage=1 00:05:07.486 --rc genhtml_legend=1 00:05:07.486 --rc geninfo_all_blocks=1 00:05:07.486 --rc geninfo_unexecuted_blocks=1 00:05:07.486 00:05:07.486 ' 00:05:07.486 05:45:29 -- rpc/rpc.sh@65 -- # spdk_pid=65570 00:05:07.486 05:45:29 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.486 05:45:29 -- rpc/rpc.sh@67 -- # waitforlisten 65570 00:05:07.486 05:45:29 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:07.486 05:45:29 -- common/autotest_common.sh@829 -- # '[' -z 65570 ']' 00:05:07.486 05:45:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.486 05:45:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.486 05:45:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.486 05:45:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.486 05:45:29 -- common/autotest_common.sh@10 -- # set +x 00:05:07.757 [2024-12-15 05:45:29.169929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:07.757 [2024-12-15 05:45:29.170039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65570 ] 00:05:07.757 [2024-12-15 05:45:29.310550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.757 [2024-12-15 05:45:29.350457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:07.757 [2024-12-15 05:45:29.350643] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:07.757 [2024-12-15 05:45:29.350670] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65570' to capture a snapshot of events at runtime. 00:05:07.757 [2024-12-15 05:45:29.350690] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65570 for offline analysis/debug. 
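The rpc_integrity run that follows drives the freshly started spdk_tgt entirely over JSON-RPC: it creates an 8 MB malloc bdev with 512-byte blocks (hence "num_blocks": 16384 in the dumps below), layers a passthru bdev on top of it, checks the bdev list length with jq, then deletes both. Outside the rpc_cmd test helper, the same sequence could be issued against the default /var/tmp/spdk.sock with scripts/rpc.py, roughly:

  ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB, 512 B blocks; prints the new name, e.g. Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2: Malloc0 + Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0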
00:05:07.757 [2024-12-15 05:45:29.350720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.694 05:45:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.694 05:45:30 -- common/autotest_common.sh@862 -- # return 0 00:05:08.694 05:45:30 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.694 05:45:30 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.694 05:45:30 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:08.694 05:45:30 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:08.694 05:45:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.694 05:45:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.694 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.694 ************************************ 00:05:08.694 START TEST rpc_integrity 00:05:08.694 ************************************ 00:05:08.694 05:45:30 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:08.694 05:45:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.694 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.694 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.694 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.694 05:45:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.694 05:45:30 -- rpc/rpc.sh@13 -- # jq length 00:05:08.694 05:45:30 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.694 05:45:30 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.694 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.694 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.694 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.694 05:45:30 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:08.694 05:45:30 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.694 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.694 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.694 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.694 05:45:30 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.694 { 00:05:08.694 "name": "Malloc0", 00:05:08.694 "aliases": [ 00:05:08.694 "83439b43-8958-4fe2-83ff-cf47d62c7e30" 00:05:08.694 ], 00:05:08.694 "product_name": "Malloc disk", 00:05:08.694 "block_size": 512, 00:05:08.694 "num_blocks": 16384, 00:05:08.694 "uuid": "83439b43-8958-4fe2-83ff-cf47d62c7e30", 00:05:08.694 "assigned_rate_limits": { 00:05:08.694 "rw_ios_per_sec": 0, 00:05:08.694 "rw_mbytes_per_sec": 0, 00:05:08.694 "r_mbytes_per_sec": 0, 00:05:08.694 "w_mbytes_per_sec": 0 00:05:08.694 }, 00:05:08.694 "claimed": false, 00:05:08.694 "zoned": false, 00:05:08.694 "supported_io_types": { 00:05:08.694 "read": true, 00:05:08.694 "write": true, 00:05:08.694 "unmap": true, 00:05:08.695 "write_zeroes": true, 00:05:08.695 "flush": true, 00:05:08.695 "reset": true, 00:05:08.695 "compare": false, 00:05:08.695 "compare_and_write": false, 00:05:08.695 "abort": true, 00:05:08.695 "nvme_admin": false, 00:05:08.695 "nvme_io": false 00:05:08.695 }, 00:05:08.695 "memory_domains": [ 00:05:08.695 { 00:05:08.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.695 
"dma_device_type": 2 00:05:08.695 } 00:05:08.695 ], 00:05:08.695 "driver_specific": {} 00:05:08.695 } 00:05:08.695 ]' 00:05:08.695 05:45:30 -- rpc/rpc.sh@17 -- # jq length 00:05:08.954 05:45:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.954 05:45:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:08.954 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.954 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.954 [2024-12-15 05:45:30.338218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:08.954 [2024-12-15 05:45:30.338293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.954 [2024-12-15 05:45:30.338310] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d23030 00:05:08.954 [2024-12-15 05:45:30.338317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.954 [2024-12-15 05:45:30.339825] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:08.954 [2024-12-15 05:45:30.339914] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.954 Passthru0 00:05:08.954 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.954 05:45:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.954 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.954 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.954 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.954 05:45:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.954 { 00:05:08.954 "name": "Malloc0", 00:05:08.954 "aliases": [ 00:05:08.954 "83439b43-8958-4fe2-83ff-cf47d62c7e30" 00:05:08.954 ], 00:05:08.954 "product_name": "Malloc disk", 00:05:08.954 "block_size": 512, 00:05:08.954 "num_blocks": 16384, 00:05:08.954 "uuid": "83439b43-8958-4fe2-83ff-cf47d62c7e30", 00:05:08.954 "assigned_rate_limits": { 00:05:08.954 "rw_ios_per_sec": 0, 00:05:08.954 "rw_mbytes_per_sec": 0, 00:05:08.954 "r_mbytes_per_sec": 0, 00:05:08.954 "w_mbytes_per_sec": 0 00:05:08.954 }, 00:05:08.954 "claimed": true, 00:05:08.954 "claim_type": "exclusive_write", 00:05:08.954 "zoned": false, 00:05:08.954 "supported_io_types": { 00:05:08.954 "read": true, 00:05:08.954 "write": true, 00:05:08.954 "unmap": true, 00:05:08.954 "write_zeroes": true, 00:05:08.954 "flush": true, 00:05:08.954 "reset": true, 00:05:08.954 "compare": false, 00:05:08.954 "compare_and_write": false, 00:05:08.954 "abort": true, 00:05:08.954 "nvme_admin": false, 00:05:08.954 "nvme_io": false 00:05:08.954 }, 00:05:08.954 "memory_domains": [ 00:05:08.954 { 00:05:08.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.954 "dma_device_type": 2 00:05:08.954 } 00:05:08.954 ], 00:05:08.954 "driver_specific": {} 00:05:08.954 }, 00:05:08.954 { 00:05:08.954 "name": "Passthru0", 00:05:08.954 "aliases": [ 00:05:08.954 "5874e090-d3e6-58c7-bb38-2b27b21652d2" 00:05:08.954 ], 00:05:08.954 "product_name": "passthru", 00:05:08.954 "block_size": 512, 00:05:08.954 "num_blocks": 16384, 00:05:08.954 "uuid": "5874e090-d3e6-58c7-bb38-2b27b21652d2", 00:05:08.954 "assigned_rate_limits": { 00:05:08.954 "rw_ios_per_sec": 0, 00:05:08.954 "rw_mbytes_per_sec": 0, 00:05:08.954 "r_mbytes_per_sec": 0, 00:05:08.954 "w_mbytes_per_sec": 0 00:05:08.954 }, 00:05:08.954 "claimed": false, 00:05:08.954 "zoned": false, 00:05:08.954 "supported_io_types": { 00:05:08.954 "read": true, 00:05:08.954 "write": true, 00:05:08.954 "unmap": true, 00:05:08.954 
"write_zeroes": true, 00:05:08.954 "flush": true, 00:05:08.954 "reset": true, 00:05:08.954 "compare": false, 00:05:08.954 "compare_and_write": false, 00:05:08.954 "abort": true, 00:05:08.954 "nvme_admin": false, 00:05:08.954 "nvme_io": false 00:05:08.954 }, 00:05:08.954 "memory_domains": [ 00:05:08.954 { 00:05:08.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.954 "dma_device_type": 2 00:05:08.954 } 00:05:08.954 ], 00:05:08.954 "driver_specific": { 00:05:08.954 "passthru": { 00:05:08.954 "name": "Passthru0", 00:05:08.954 "base_bdev_name": "Malloc0" 00:05:08.954 } 00:05:08.954 } 00:05:08.954 } 00:05:08.954 ]' 00:05:08.954 05:45:30 -- rpc/rpc.sh@21 -- # jq length 00:05:08.954 05:45:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.955 05:45:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.955 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.955 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.955 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.955 05:45:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:08.955 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.955 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.955 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.955 05:45:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.955 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.955 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.955 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.955 05:45:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.955 05:45:30 -- rpc/rpc.sh@26 -- # jq length 00:05:08.955 05:45:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:08.955 00:05:08.955 real 0m0.313s 00:05:08.955 user 0m0.215s 00:05:08.955 sys 0m0.031s 00:05:08.955 05:45:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.955 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.955 ************************************ 00:05:08.955 END TEST rpc_integrity 00:05:08.955 ************************************ 00:05:08.955 05:45:30 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:08.955 05:45:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.955 05:45:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.955 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.955 ************************************ 00:05:08.955 START TEST rpc_plugins 00:05:08.955 ************************************ 00:05:08.955 05:45:30 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:08.955 05:45:30 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:08.955 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.955 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.955 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.955 05:45:30 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:08.955 05:45:30 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:08.955 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.955 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.955 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.955 05:45:30 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:08.955 { 00:05:08.955 "name": "Malloc1", 00:05:08.955 "aliases": [ 00:05:08.955 "52cf2313-affb-4682-b2b9-351e0d4d0a16" 00:05:08.955 ], 00:05:08.955 "product_name": "Malloc disk", 00:05:08.955 
"block_size": 4096, 00:05:08.955 "num_blocks": 256, 00:05:08.955 "uuid": "52cf2313-affb-4682-b2b9-351e0d4d0a16", 00:05:08.955 "assigned_rate_limits": { 00:05:08.955 "rw_ios_per_sec": 0, 00:05:08.955 "rw_mbytes_per_sec": 0, 00:05:08.955 "r_mbytes_per_sec": 0, 00:05:08.955 "w_mbytes_per_sec": 0 00:05:08.955 }, 00:05:08.955 "claimed": false, 00:05:08.955 "zoned": false, 00:05:08.955 "supported_io_types": { 00:05:08.955 "read": true, 00:05:08.955 "write": true, 00:05:08.955 "unmap": true, 00:05:08.955 "write_zeroes": true, 00:05:08.955 "flush": true, 00:05:08.955 "reset": true, 00:05:08.955 "compare": false, 00:05:08.955 "compare_and_write": false, 00:05:08.955 "abort": true, 00:05:08.955 "nvme_admin": false, 00:05:08.955 "nvme_io": false 00:05:08.955 }, 00:05:08.955 "memory_domains": [ 00:05:08.955 { 00:05:08.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.955 "dma_device_type": 2 00:05:08.955 } 00:05:08.955 ], 00:05:08.955 "driver_specific": {} 00:05:08.955 } 00:05:08.955 ]' 00:05:08.955 05:45:30 -- rpc/rpc.sh@32 -- # jq length 00:05:09.214 05:45:30 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:09.214 05:45:30 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:09.214 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.214 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.214 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.214 05:45:30 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:09.214 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.214 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.214 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.214 05:45:30 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:09.214 05:45:30 -- rpc/rpc.sh@36 -- # jq length 00:05:09.214 05:45:30 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:09.214 00:05:09.214 real 0m0.149s 00:05:09.214 user 0m0.100s 00:05:09.214 sys 0m0.015s 00:05:09.214 05:45:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.214 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.214 ************************************ 00:05:09.214 END TEST rpc_plugins 00:05:09.214 ************************************ 00:05:09.214 05:45:30 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:09.214 05:45:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.214 05:45:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.214 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.214 ************************************ 00:05:09.214 START TEST rpc_trace_cmd_test 00:05:09.214 ************************************ 00:05:09.214 05:45:30 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:09.214 05:45:30 -- rpc/rpc.sh@40 -- # local info 00:05:09.214 05:45:30 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:09.214 05:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.214 05:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.214 05:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.214 05:45:30 -- rpc/rpc.sh@42 -- # info='{ 00:05:09.214 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65570", 00:05:09.214 "tpoint_group_mask": "0x8", 00:05:09.214 "iscsi_conn": { 00:05:09.214 "mask": "0x2", 00:05:09.214 "tpoint_mask": "0x0" 00:05:09.214 }, 00:05:09.214 "scsi": { 00:05:09.214 "mask": "0x4", 00:05:09.214 "tpoint_mask": "0x0" 00:05:09.214 }, 00:05:09.214 "bdev": { 00:05:09.214 "mask": "0x8", 00:05:09.214 "tpoint_mask": 
"0xffffffffffffffff" 00:05:09.214 }, 00:05:09.214 "nvmf_rdma": { 00:05:09.214 "mask": "0x10", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "nvmf_tcp": { 00:05:09.215 "mask": "0x20", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "ftl": { 00:05:09.215 "mask": "0x40", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "blobfs": { 00:05:09.215 "mask": "0x80", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "dsa": { 00:05:09.215 "mask": "0x200", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "thread": { 00:05:09.215 "mask": "0x400", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "nvme_pcie": { 00:05:09.215 "mask": "0x800", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "iaa": { 00:05:09.215 "mask": "0x1000", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "nvme_tcp": { 00:05:09.215 "mask": "0x2000", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 }, 00:05:09.215 "bdev_nvme": { 00:05:09.215 "mask": "0x4000", 00:05:09.215 "tpoint_mask": "0x0" 00:05:09.215 } 00:05:09.215 }' 00:05:09.215 05:45:30 -- rpc/rpc.sh@43 -- # jq length 00:05:09.215 05:45:30 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:09.215 05:45:30 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:09.474 05:45:30 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:09.474 05:45:30 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:09.474 05:45:30 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:09.474 05:45:30 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:09.474 05:45:30 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:09.474 05:45:30 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:09.474 05:45:31 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:09.474 00:05:09.474 real 0m0.279s 00:05:09.474 user 0m0.237s 00:05:09.474 sys 0m0.032s 00:05:09.474 05:45:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.474 ************************************ 00:05:09.474 END TEST rpc_trace_cmd_test 00:05:09.474 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 ************************************ 00:05:09.474 05:45:31 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:09.474 05:45:31 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:09.474 05:45:31 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:09.474 05:45:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.474 05:45:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.474 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 ************************************ 00:05:09.474 START TEST rpc_daemon_integrity 00:05:09.474 ************************************ 00:05:09.474 05:45:31 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:09.474 05:45:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.474 05:45:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.474 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 05:45:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.474 05:45:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.474 05:45:31 -- rpc/rpc.sh@13 -- # jq length 00:05:09.733 05:45:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.733 05:45:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.733 05:45:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.733 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.733 05:45:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.733 05:45:31 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:09.733 05:45:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.733 05:45:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.733 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.733 05:45:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.733 05:45:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.733 { 00:05:09.733 "name": "Malloc2", 00:05:09.733 "aliases": [ 00:05:09.733 "55d0ae04-1fd3-46a9-9357-c68fe99e05b1" 00:05:09.733 ], 00:05:09.733 "product_name": "Malloc disk", 00:05:09.733 "block_size": 512, 00:05:09.733 "num_blocks": 16384, 00:05:09.733 "uuid": "55d0ae04-1fd3-46a9-9357-c68fe99e05b1", 00:05:09.733 "assigned_rate_limits": { 00:05:09.733 "rw_ios_per_sec": 0, 00:05:09.733 "rw_mbytes_per_sec": 0, 00:05:09.733 "r_mbytes_per_sec": 0, 00:05:09.733 "w_mbytes_per_sec": 0 00:05:09.733 }, 00:05:09.733 "claimed": false, 00:05:09.733 "zoned": false, 00:05:09.733 "supported_io_types": { 00:05:09.733 "read": true, 00:05:09.733 "write": true, 00:05:09.733 "unmap": true, 00:05:09.733 "write_zeroes": true, 00:05:09.733 "flush": true, 00:05:09.733 "reset": true, 00:05:09.733 "compare": false, 00:05:09.733 "compare_and_write": false, 00:05:09.733 "abort": true, 00:05:09.733 "nvme_admin": false, 00:05:09.733 "nvme_io": false 00:05:09.733 }, 00:05:09.733 "memory_domains": [ 00:05:09.733 { 00:05:09.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.733 "dma_device_type": 2 00:05:09.733 } 00:05:09.733 ], 00:05:09.733 "driver_specific": {} 00:05:09.733 } 00:05:09.733 ]' 00:05:09.733 05:45:31 -- rpc/rpc.sh@17 -- # jq length 00:05:09.733 05:45:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.733 05:45:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:09.733 05:45:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.733 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.733 [2024-12-15 05:45:31.234595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:09.733 [2024-12-15 05:45:31.234662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.733 [2024-12-15 05:45:31.234677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec1fe0 00:05:09.733 [2024-12-15 05:45:31.234684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.733 [2024-12-15 05:45:31.236056] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.733 [2024-12-15 05:45:31.236092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.733 Passthru0 00:05:09.733 05:45:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.733 05:45:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.733 05:45:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.733 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.733 05:45:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.733 05:45:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.733 { 00:05:09.733 "name": "Malloc2", 00:05:09.733 "aliases": [ 00:05:09.733 "55d0ae04-1fd3-46a9-9357-c68fe99e05b1" 00:05:09.733 ], 00:05:09.733 "product_name": "Malloc disk", 00:05:09.733 "block_size": 512, 00:05:09.733 "num_blocks": 16384, 00:05:09.733 "uuid": "55d0ae04-1fd3-46a9-9357-c68fe99e05b1", 00:05:09.733 "assigned_rate_limits": { 00:05:09.733 "rw_ios_per_sec": 0, 00:05:09.733 "rw_mbytes_per_sec": 0, 00:05:09.733 "r_mbytes_per_sec": 0, 00:05:09.733 
"w_mbytes_per_sec": 0 00:05:09.733 }, 00:05:09.733 "claimed": true, 00:05:09.733 "claim_type": "exclusive_write", 00:05:09.733 "zoned": false, 00:05:09.733 "supported_io_types": { 00:05:09.733 "read": true, 00:05:09.733 "write": true, 00:05:09.733 "unmap": true, 00:05:09.733 "write_zeroes": true, 00:05:09.733 "flush": true, 00:05:09.733 "reset": true, 00:05:09.733 "compare": false, 00:05:09.733 "compare_and_write": false, 00:05:09.733 "abort": true, 00:05:09.733 "nvme_admin": false, 00:05:09.733 "nvme_io": false 00:05:09.733 }, 00:05:09.733 "memory_domains": [ 00:05:09.733 { 00:05:09.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.733 "dma_device_type": 2 00:05:09.733 } 00:05:09.733 ], 00:05:09.733 "driver_specific": {} 00:05:09.733 }, 00:05:09.733 { 00:05:09.733 "name": "Passthru0", 00:05:09.733 "aliases": [ 00:05:09.733 "d3c63f53-c17b-5d4b-bb2f-b9d070437f57" 00:05:09.733 ], 00:05:09.733 "product_name": "passthru", 00:05:09.733 "block_size": 512, 00:05:09.733 "num_blocks": 16384, 00:05:09.733 "uuid": "d3c63f53-c17b-5d4b-bb2f-b9d070437f57", 00:05:09.733 "assigned_rate_limits": { 00:05:09.733 "rw_ios_per_sec": 0, 00:05:09.733 "rw_mbytes_per_sec": 0, 00:05:09.733 "r_mbytes_per_sec": 0, 00:05:09.733 "w_mbytes_per_sec": 0 00:05:09.733 }, 00:05:09.733 "claimed": false, 00:05:09.733 "zoned": false, 00:05:09.733 "supported_io_types": { 00:05:09.733 "read": true, 00:05:09.733 "write": true, 00:05:09.733 "unmap": true, 00:05:09.733 "write_zeroes": true, 00:05:09.733 "flush": true, 00:05:09.733 "reset": true, 00:05:09.733 "compare": false, 00:05:09.733 "compare_and_write": false, 00:05:09.734 "abort": true, 00:05:09.734 "nvme_admin": false, 00:05:09.734 "nvme_io": false 00:05:09.734 }, 00:05:09.734 "memory_domains": [ 00:05:09.734 { 00:05:09.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.734 "dma_device_type": 2 00:05:09.734 } 00:05:09.734 ], 00:05:09.734 "driver_specific": { 00:05:09.734 "passthru": { 00:05:09.734 "name": "Passthru0", 00:05:09.734 "base_bdev_name": "Malloc2" 00:05:09.734 } 00:05:09.734 } 00:05:09.734 } 00:05:09.734 ]' 00:05:09.734 05:45:31 -- rpc/rpc.sh@21 -- # jq length 00:05:09.734 05:45:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.734 05:45:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.734 05:45:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.734 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.734 05:45:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.734 05:45:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:09.734 05:45:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.734 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.734 05:45:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.734 05:45:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.734 05:45:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.734 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.734 05:45:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.734 05:45:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.734 05:45:31 -- rpc/rpc.sh@26 -- # jq length 00:05:09.993 05:45:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.993 00:05:09.993 real 0m0.312s 00:05:09.993 user 0m0.213s 00:05:09.993 sys 0m0.035s 00:05:09.993 05:45:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.993 ************************************ 00:05:09.993 END TEST rpc_daemon_integrity 00:05:09.993 ************************************ 00:05:09.993 
05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:09.993 05:45:31 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:09.993 05:45:31 -- rpc/rpc.sh@84 -- # killprocess 65570 00:05:09.993 05:45:31 -- common/autotest_common.sh@936 -- # '[' -z 65570 ']' 00:05:09.993 05:45:31 -- common/autotest_common.sh@940 -- # kill -0 65570 00:05:09.993 05:45:31 -- common/autotest_common.sh@941 -- # uname 00:05:09.993 05:45:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.993 05:45:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65570 00:05:09.993 05:45:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.993 05:45:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.993 killing process with pid 65570 00:05:09.993 05:45:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65570' 00:05:09.993 05:45:31 -- common/autotest_common.sh@955 -- # kill 65570 00:05:09.993 05:45:31 -- common/autotest_common.sh@960 -- # wait 65570 00:05:10.252 00:05:10.252 real 0m2.772s 00:05:10.252 user 0m3.744s 00:05:10.252 sys 0m0.575s 00:05:10.252 05:45:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.252 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.252 ************************************ 00:05:10.252 END TEST rpc 00:05:10.252 ************************************ 00:05:10.252 05:45:31 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:10.252 05:45:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.252 05:45:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.252 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.252 ************************************ 00:05:10.252 START TEST rpc_client 00:05:10.252 ************************************ 00:05:10.252 05:45:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:10.252 * Looking for test storage... 00:05:10.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:10.252 05:45:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.252 05:45:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.252 05:45:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.511 05:45:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.511 05:45:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.511 05:45:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.511 05:45:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.511 05:45:31 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.511 05:45:31 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.511 05:45:31 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.511 05:45:31 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.511 05:45:31 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.511 05:45:31 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.511 05:45:31 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.511 05:45:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.511 05:45:31 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.511 05:45:31 -- scripts/common.sh@344 -- # : 1 00:05:10.511 05:45:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.511 05:45:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.511 05:45:31 -- scripts/common.sh@364 -- # decimal 1 00:05:10.511 05:45:31 -- scripts/common.sh@352 -- # local d=1 00:05:10.511 05:45:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.511 05:45:31 -- scripts/common.sh@354 -- # echo 1 00:05:10.511 05:45:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.511 05:45:31 -- scripts/common.sh@365 -- # decimal 2 00:05:10.511 05:45:31 -- scripts/common.sh@352 -- # local d=2 00:05:10.511 05:45:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.511 05:45:31 -- scripts/common.sh@354 -- # echo 2 00:05:10.511 05:45:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.511 05:45:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.511 05:45:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.511 05:45:31 -- scripts/common.sh@367 -- # return 0 00:05:10.511 05:45:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.511 05:45:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.511 --rc genhtml_branch_coverage=1 00:05:10.511 --rc genhtml_function_coverage=1 00:05:10.511 --rc genhtml_legend=1 00:05:10.511 --rc geninfo_all_blocks=1 00:05:10.511 --rc geninfo_unexecuted_blocks=1 00:05:10.511 00:05:10.511 ' 00:05:10.511 05:45:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.511 --rc genhtml_branch_coverage=1 00:05:10.511 --rc genhtml_function_coverage=1 00:05:10.511 --rc genhtml_legend=1 00:05:10.511 --rc geninfo_all_blocks=1 00:05:10.511 --rc geninfo_unexecuted_blocks=1 00:05:10.511 00:05:10.511 ' 00:05:10.511 05:45:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.511 --rc genhtml_branch_coverage=1 00:05:10.511 --rc genhtml_function_coverage=1 00:05:10.511 --rc genhtml_legend=1 00:05:10.511 --rc geninfo_all_blocks=1 00:05:10.511 --rc geninfo_unexecuted_blocks=1 00:05:10.511 00:05:10.511 ' 00:05:10.511 05:45:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.512 --rc genhtml_branch_coverage=1 00:05:10.512 --rc genhtml_function_coverage=1 00:05:10.512 --rc genhtml_legend=1 00:05:10.512 --rc geninfo_all_blocks=1 00:05:10.512 --rc geninfo_unexecuted_blocks=1 00:05:10.512 00:05:10.512 ' 00:05:10.512 05:45:31 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:10.512 OK 00:05:10.512 05:45:31 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:10.512 00:05:10.512 real 0m0.196s 00:05:10.512 user 0m0.135s 00:05:10.512 sys 0m0.073s 00:05:10.512 05:45:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.512 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.512 ************************************ 00:05:10.512 END TEST rpc_client 00:05:10.512 ************************************ 00:05:10.512 05:45:31 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:10.512 05:45:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.512 05:45:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.512 05:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.512 ************************************ 00:05:10.512 START TEST 
json_config 00:05:10.512 ************************************ 00:05:10.512 05:45:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:10.512 05:45:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.512 05:45:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.512 05:45:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.512 05:45:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.512 05:45:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.512 05:45:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.512 05:45:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.512 05:45:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.512 05:45:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.512 05:45:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.512 05:45:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.512 05:45:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.512 05:45:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.512 05:45:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.512 05:45:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.512 05:45:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.512 05:45:32 -- scripts/common.sh@344 -- # : 1 00:05:10.512 05:45:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.512 05:45:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.512 05:45:32 -- scripts/common.sh@364 -- # decimal 1 00:05:10.512 05:45:32 -- scripts/common.sh@352 -- # local d=1 00:05:10.512 05:45:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.512 05:45:32 -- scripts/common.sh@354 -- # echo 1 00:05:10.512 05:45:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.512 05:45:32 -- scripts/common.sh@365 -- # decimal 2 00:05:10.512 05:45:32 -- scripts/common.sh@352 -- # local d=2 00:05:10.512 05:45:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.512 05:45:32 -- scripts/common.sh@354 -- # echo 2 00:05:10.512 05:45:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.512 05:45:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.512 05:45:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.512 05:45:32 -- scripts/common.sh@367 -- # return 0 00:05:10.512 05:45:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.512 05:45:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.512 --rc genhtml_branch_coverage=1 00:05:10.512 --rc genhtml_function_coverage=1 00:05:10.512 --rc genhtml_legend=1 00:05:10.512 --rc geninfo_all_blocks=1 00:05:10.512 --rc geninfo_unexecuted_blocks=1 00:05:10.512 00:05:10.512 ' 00:05:10.512 05:45:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.512 --rc genhtml_branch_coverage=1 00:05:10.512 --rc genhtml_function_coverage=1 00:05:10.512 --rc genhtml_legend=1 00:05:10.512 --rc geninfo_all_blocks=1 00:05:10.512 --rc geninfo_unexecuted_blocks=1 00:05:10.512 00:05:10.512 ' 00:05:10.512 05:45:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.512 --rc genhtml_branch_coverage=1 00:05:10.512 --rc genhtml_function_coverage=1 00:05:10.512 --rc genhtml_legend=1 00:05:10.512 --rc 
geninfo_all_blocks=1 00:05:10.512 --rc geninfo_unexecuted_blocks=1 00:05:10.512 00:05:10.512 ' 00:05:10.512 05:45:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.512 --rc genhtml_branch_coverage=1 00:05:10.512 --rc genhtml_function_coverage=1 00:05:10.512 --rc genhtml_legend=1 00:05:10.512 --rc geninfo_all_blocks=1 00:05:10.512 --rc geninfo_unexecuted_blocks=1 00:05:10.512 00:05:10.512 ' 00:05:10.512 05:45:32 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.512 05:45:32 -- nvmf/common.sh@7 -- # uname -s 00:05:10.512 05:45:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.512 05:45:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.512 05:45:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.512 05:45:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.512 05:45:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.512 05:45:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.512 05:45:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.512 05:45:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.512 05:45:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.512 05:45:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.771 05:45:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:05:10.771 05:45:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:05:10.771 05:45:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.771 05:45:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.771 05:45:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.771 05:45:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.771 05:45:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.771 05:45:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.771 05:45:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.771 05:45:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.771 05:45:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.772 05:45:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.772 
05:45:32 -- paths/export.sh@5 -- # export PATH 00:05:10.772 05:45:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.772 05:45:32 -- nvmf/common.sh@46 -- # : 0 00:05:10.772 05:45:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:10.772 05:45:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:10.772 05:45:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:10.772 05:45:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.772 05:45:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.772 05:45:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:10.772 05:45:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:10.772 05:45:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:10.772 05:45:32 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:10.772 05:45:32 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:10.772 05:45:32 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:10.772 05:45:32 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:10.772 05:45:32 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:10.772 05:45:32 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:10.772 05:45:32 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:10.772 05:45:32 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:10.772 05:45:32 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:10.772 05:45:32 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:10.772 05:45:32 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:10.772 05:45:32 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:10.772 05:45:32 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:10.772 05:45:32 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.772 INFO: JSON configuration test init 00:05:10.772 05:45:32 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:10.772 05:45:32 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:10.772 05:45:32 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:10.772 05:45:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.772 05:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:10.772 05:45:32 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:10.772 05:45:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.772 05:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:10.772 05:45:32 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:10.772 05:45:32 -- json_config/json_config.sh@98 -- # local app=target 00:05:10.772 
05:45:32 -- json_config/json_config.sh@99 -- # shift 00:05:10.772 05:45:32 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:10.772 05:45:32 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:10.772 05:45:32 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:10.772 05:45:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:10.772 05:45:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:10.772 05:45:32 -- json_config/json_config.sh@111 -- # app_pid[$app]=65823 00:05:10.772 05:45:32 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:10.772 Waiting for target to run... 00:05:10.772 05:45:32 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:10.772 05:45:32 -- json_config/json_config.sh@114 -- # waitforlisten 65823 /var/tmp/spdk_tgt.sock 00:05:10.772 05:45:32 -- common/autotest_common.sh@829 -- # '[' -z 65823 ']' 00:05:10.772 05:45:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.772 05:45:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.772 05:45:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.772 05:45:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.772 05:45:32 -- common/autotest_common.sh@10 -- # set +x 00:05:10.772 [2024-12-15 05:45:32.264847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:10.772 [2024-12-15 05:45:32.265024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65823 ] 00:05:11.031 [2024-12-15 05:45:32.595784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.031 [2024-12-15 05:45:32.614163] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.031 [2024-12-15 05:45:32.614351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.968 05:45:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.968 05:45:33 -- common/autotest_common.sh@862 -- # return 0 00:05:11.968 00:05:11.968 05:45:33 -- json_config/json_config.sh@115 -- # echo '' 00:05:11.968 05:45:33 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:11.968 05:45:33 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:11.968 05:45:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.968 05:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:11.968 05:45:33 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:11.968 05:45:33 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:11.968 05:45:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.968 05:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:11.968 05:45:33 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:11.968 05:45:33 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:11.968 05:45:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:12.226 05:45:33 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:12.226 05:45:33 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:12.226 05:45:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.226 05:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:12.227 05:45:33 -- json_config/json_config.sh@48 -- # local ret=0 00:05:12.227 05:45:33 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:12.227 05:45:33 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:12.227 05:45:33 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:12.227 05:45:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:12.227 05:45:33 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:12.486 05:45:34 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:12.486 05:45:34 -- json_config/json_config.sh@51 -- # local get_types 00:05:12.486 05:45:34 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:12.486 05:45:34 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:12.486 05:45:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.486 05:45:34 -- common/autotest_common.sh@10 -- # set +x 00:05:12.486 05:45:34 -- json_config/json_config.sh@58 -- # return 0 00:05:12.486 05:45:34 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:12.486 05:45:34 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:12.486 05:45:34 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:12.486 05:45:34 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:12.486 05:45:34 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:12.486 05:45:34 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:12.486 05:45:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.486 05:45:34 -- common/autotest_common.sh@10 -- # set +x 00:05:12.486 05:45:34 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:12.486 05:45:34 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:12.486 05:45:34 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:12.486 05:45:34 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:12.486 05:45:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:12.745 MallocForNvmf0 00:05:12.745 05:45:34 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:12.745 05:45:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:13.004 MallocForNvmf1 00:05:13.263 05:45:34 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:13.263 05:45:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:13.263 [2024-12-15 05:45:34.853316] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.263 05:45:34 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:13.263 05:45:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:13.522 05:45:35 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:13.522 05:45:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:13.780 05:45:35 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:13.780 05:45:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:14.039 05:45:35 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:14.039 05:45:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:14.298 [2024-12-15 05:45:35.741735] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.298 05:45:35 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:14.298 05:45:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.298 05:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 05:45:35 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:14.298 05:45:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.298 05:45:35 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 05:45:35 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:14.298 05:45:35 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:14.298 05:45:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:14.557 MallocBdevForConfigChangeCheck 00:05:14.557 05:45:36 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:14.557 05:45:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.557 05:45:36 -- common/autotest_common.sh@10 -- # set +x 00:05:14.557 05:45:36 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:14.557 05:45:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.125 INFO: shutting down applications... 00:05:15.125 05:45:36 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:05:15.125 05:45:36 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:15.125 05:45:36 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:15.125 05:45:36 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:15.125 05:45:36 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:15.384 Calling clear_iscsi_subsystem 00:05:15.384 Calling clear_nvmf_subsystem 00:05:15.384 Calling clear_nbd_subsystem 00:05:15.384 Calling clear_ublk_subsystem 00:05:15.384 Calling clear_vhost_blk_subsystem 00:05:15.384 Calling clear_vhost_scsi_subsystem 00:05:15.384 Calling clear_scheduler_subsystem 00:05:15.384 Calling clear_bdev_subsystem 00:05:15.384 Calling clear_accel_subsystem 00:05:15.384 Calling clear_vmd_subsystem 00:05:15.384 Calling clear_sock_subsystem 00:05:15.384 Calling clear_iobuf_subsystem 00:05:15.384 05:45:36 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:15.384 05:45:36 -- json_config/json_config.sh@396 -- # count=100 00:05:15.384 05:45:36 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:15.384 05:45:36 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.384 05:45:36 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:15.384 05:45:36 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:15.643 05:45:37 -- json_config/json_config.sh@398 -- # break 00:05:15.643 05:45:37 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:15.643 05:45:37 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:15.643 05:45:37 -- json_config/json_config.sh@120 -- # local app=target 00:05:15.643 05:45:37 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:15.643 05:45:37 -- json_config/json_config.sh@124 -- # [[ -n 65823 ]] 00:05:15.643 05:45:37 -- json_config/json_config.sh@127 -- # kill -SIGINT 65823 00:05:15.643 05:45:37 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:15.643 05:45:37 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:15.643 05:45:37 -- json_config/json_config.sh@130 -- # kill -0 65823 00:05:15.643 05:45:37 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:16.211 05:45:37 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:16.211 05:45:37 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:16.211 05:45:37 -- json_config/json_config.sh@130 -- # kill -0 65823 00:05:16.211 05:45:37 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:16.211 05:45:37 -- json_config/json_config.sh@132 -- # break 00:05:16.211 05:45:37 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:16.211 SPDK target shutdown done 00:05:16.211 05:45:37 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:16.211 INFO: relaunching applications... 00:05:16.211 05:45:37 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:05:16.211 05:45:37 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:16.211 05:45:37 -- json_config/json_config.sh@98 -- # local app=target 00:05:16.211 05:45:37 -- json_config/json_config.sh@99 -- # shift 00:05:16.211 05:45:37 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:16.211 05:45:37 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:16.211 05:45:37 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:16.211 05:45:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:16.211 05:45:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:16.211 05:45:37 -- json_config/json_config.sh@111 -- # app_pid[$app]=66019 00:05:16.211 Waiting for target to run... 00:05:16.211 05:45:37 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:16.211 05:45:37 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:16.211 05:45:37 -- json_config/json_config.sh@114 -- # waitforlisten 66019 /var/tmp/spdk_tgt.sock 00:05:16.211 05:45:37 -- common/autotest_common.sh@829 -- # '[' -z 66019 ']' 00:05:16.211 05:45:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.211 05:45:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.211 05:45:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.211 05:45:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.211 05:45:37 -- common/autotest_common.sh@10 -- # set +x 00:05:16.211 [2024-12-15 05:45:37.783403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:16.211 [2024-12-15 05:45:37.783520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66019 ] 00:05:16.471 [2024-12-15 05:45:38.080422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.471 [2024-12-15 05:45:38.099315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.471 [2024-12-15 05:45:38.099538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.039 [2024-12-15 05:45:38.390885] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.039 [2024-12-15 05:45:38.422971] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:17.298 05:45:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.298 05:45:38 -- common/autotest_common.sh@862 -- # return 0 00:05:17.298 00:05:17.298 05:45:38 -- json_config/json_config.sh@115 -- # echo '' 00:05:17.298 05:45:38 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:17.298 INFO: Checking if target configuration is the same... 00:05:17.298 05:45:38 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:17.298 05:45:38 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:17.298 05:45:38 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:17.298 05:45:38 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.298 + '[' 2 -ne 2 ']' 00:05:17.298 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:17.298 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:17.298 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:17.298 +++ basename /dev/fd/62 00:05:17.298 ++ mktemp /tmp/62.XXX 00:05:17.298 + tmp_file_1=/tmp/62.P1s 00:05:17.298 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:17.298 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:17.298 + tmp_file_2=/tmp/spdk_tgt_config.json.NWD 00:05:17.298 + ret=0 00:05:17.298 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:17.557 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:17.557 + diff -u /tmp/62.P1s /tmp/spdk_tgt_config.json.NWD 00:05:17.557 INFO: JSON config files are the same 00:05:17.557 + echo 'INFO: JSON config files are the same' 00:05:17.557 + rm /tmp/62.P1s /tmp/spdk_tgt_config.json.NWD 00:05:17.557 + exit 0 00:05:17.557 05:45:39 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:17.557 INFO: changing configuration and checking if this can be detected... 00:05:17.557 05:45:39 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:17.557 05:45:39 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:17.557 05:45:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:17.817 05:45:39 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:17.817 05:45:39 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:17.817 05:45:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.817 + '[' 2 -ne 2 ']' 00:05:17.817 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:17.817 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:17.817 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:17.817 +++ basename /dev/fd/62 00:05:17.817 ++ mktemp /tmp/62.XXX 00:05:17.817 + tmp_file_1=/tmp/62.40D 00:05:17.817 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:17.817 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:17.817 + tmp_file_2=/tmp/spdk_tgt_config.json.UKw 00:05:17.817 + ret=0 00:05:17.817 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:18.384 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:18.384 + diff -u /tmp/62.40D /tmp/spdk_tgt_config.json.UKw 00:05:18.384 + ret=1 00:05:18.384 + echo '=== Start of file: /tmp/62.40D ===' 00:05:18.384 + cat /tmp/62.40D 00:05:18.384 + echo '=== End of file: /tmp/62.40D ===' 00:05:18.384 + echo '' 00:05:18.384 + echo '=== Start of file: /tmp/spdk_tgt_config.json.UKw ===' 00:05:18.384 + cat /tmp/spdk_tgt_config.json.UKw 00:05:18.384 + echo '=== End of file: /tmp/spdk_tgt_config.json.UKw ===' 00:05:18.384 + echo '' 00:05:18.384 + rm /tmp/62.40D /tmp/spdk_tgt_config.json.UKw 00:05:18.385 + exit 1 00:05:18.385 INFO: configuration change detected. 00:05:18.385 05:45:39 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:18.385 05:45:39 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:18.385 05:45:39 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:18.385 05:45:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.385 05:45:39 -- common/autotest_common.sh@10 -- # set +x 00:05:18.385 05:45:39 -- json_config/json_config.sh@360 -- # local ret=0 00:05:18.385 05:45:39 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:18.385 05:45:39 -- json_config/json_config.sh@370 -- # [[ -n 66019 ]] 00:05:18.385 05:45:39 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:18.385 05:45:39 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:18.385 05:45:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.385 05:45:39 -- common/autotest_common.sh@10 -- # set +x 00:05:18.385 05:45:39 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:18.385 05:45:39 -- json_config/json_config.sh@246 -- # uname -s 00:05:18.385 05:45:39 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:18.385 05:45:39 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:18.385 05:45:39 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:18.385 05:45:39 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:18.385 05:45:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.385 05:45:39 -- common/autotest_common.sh@10 -- # set +x 00:05:18.385 05:45:39 -- json_config/json_config.sh@376 -- # killprocess 66019 00:05:18.385 05:45:39 -- common/autotest_common.sh@936 -- # '[' -z 66019 ']' 00:05:18.385 05:45:39 -- common/autotest_common.sh@940 -- # kill -0 66019 00:05:18.385 05:45:39 -- common/autotest_common.sh@941 -- # uname 00:05:18.385 05:45:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.385 05:45:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66019 00:05:18.385 05:45:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.385 05:45:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.385 05:45:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66019' 00:05:18.385 killing process with pid 66019 00:05:18.385 
05:45:39 -- common/autotest_common.sh@955 -- # kill 66019 00:05:18.385 05:45:39 -- common/autotest_common.sh@960 -- # wait 66019 00:05:18.644 05:45:40 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.644 05:45:40 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:18.644 05:45:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.644 05:45:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.644 05:45:40 -- json_config/json_config.sh@381 -- # return 0 00:05:18.644 INFO: Success 00:05:18.644 05:45:40 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:18.644 00:05:18.644 real 0m8.092s 00:05:18.644 user 0m11.701s 00:05:18.644 sys 0m1.412s 00:05:18.644 05:45:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.644 05:45:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.644 ************************************ 00:05:18.644 END TEST json_config 00:05:18.644 ************************************ 00:05:18.644 05:45:40 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.644 05:45:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.644 05:45:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.644 05:45:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.644 ************************************ 00:05:18.644 START TEST json_config_extra_key 00:05:18.644 ************************************ 00:05:18.644 05:45:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.644 05:45:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:18.644 05:45:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:18.644 05:45:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:18.644 05:45:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:18.644 05:45:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:18.644 05:45:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:18.644 05:45:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:18.644 05:45:40 -- scripts/common.sh@335 -- # IFS=.-: 00:05:18.644 05:45:40 -- scripts/common.sh@335 -- # read -ra ver1 00:05:18.644 05:45:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.644 05:45:40 -- scripts/common.sh@336 -- # read -ra ver2 00:05:18.644 05:45:40 -- scripts/common.sh@337 -- # local 'op=<' 00:05:18.644 05:45:40 -- scripts/common.sh@339 -- # ver1_l=2 00:05:18.644 05:45:40 -- scripts/common.sh@340 -- # ver2_l=1 00:05:18.644 05:45:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:18.644 05:45:40 -- scripts/common.sh@343 -- # case "$op" in 00:05:18.644 05:45:40 -- scripts/common.sh@344 -- # : 1 00:05:18.644 05:45:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:18.644 05:45:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.644 05:45:40 -- scripts/common.sh@364 -- # decimal 1 00:05:18.644 05:45:40 -- scripts/common.sh@352 -- # local d=1 00:05:18.644 05:45:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.644 05:45:40 -- scripts/common.sh@354 -- # echo 1 00:05:18.644 05:45:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:18.644 05:45:40 -- scripts/common.sh@365 -- # decimal 2 00:05:18.644 05:45:40 -- scripts/common.sh@352 -- # local d=2 00:05:18.644 05:45:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.644 05:45:40 -- scripts/common.sh@354 -- # echo 2 00:05:18.644 05:45:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:18.644 05:45:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:18.644 05:45:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:18.644 05:45:40 -- scripts/common.sh@367 -- # return 0 00:05:18.644 05:45:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.644 05:45:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.644 --rc genhtml_branch_coverage=1 00:05:18.644 --rc genhtml_function_coverage=1 00:05:18.644 --rc genhtml_legend=1 00:05:18.644 --rc geninfo_all_blocks=1 00:05:18.644 --rc geninfo_unexecuted_blocks=1 00:05:18.644 00:05:18.644 ' 00:05:18.644 05:45:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.644 --rc genhtml_branch_coverage=1 00:05:18.644 --rc genhtml_function_coverage=1 00:05:18.644 --rc genhtml_legend=1 00:05:18.644 --rc geninfo_all_blocks=1 00:05:18.644 --rc geninfo_unexecuted_blocks=1 00:05:18.644 00:05:18.644 ' 00:05:18.644 05:45:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.644 --rc genhtml_branch_coverage=1 00:05:18.644 --rc genhtml_function_coverage=1 00:05:18.644 --rc genhtml_legend=1 00:05:18.644 --rc geninfo_all_blocks=1 00:05:18.644 --rc geninfo_unexecuted_blocks=1 00:05:18.644 00:05:18.644 ' 00:05:18.644 05:45:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.644 --rc genhtml_branch_coverage=1 00:05:18.644 --rc genhtml_function_coverage=1 00:05:18.644 --rc genhtml_legend=1 00:05:18.644 --rc geninfo_all_blocks=1 00:05:18.644 --rc geninfo_unexecuted_blocks=1 00:05:18.644 00:05:18.644 ' 00:05:18.644 05:45:40 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.644 05:45:40 -- nvmf/common.sh@7 -- # uname -s 00:05:18.644 05:45:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.644 05:45:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.644 05:45:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.644 05:45:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.644 05:45:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.644 05:45:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.644 05:45:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.644 05:45:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.644 05:45:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.644 05:45:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.644 05:45:40 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:05:18.644 05:45:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:05:18.644 05:45:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.644 05:45:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.644 05:45:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.644 05:45:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.644 05:45:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.644 05:45:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.644 05:45:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.645 05:45:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.904 05:45:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.904 05:45:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.904 05:45:40 -- paths/export.sh@5 -- # export PATH 00:05:18.904 05:45:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.904 05:45:40 -- nvmf/common.sh@46 -- # : 0 00:05:18.904 05:45:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:18.904 05:45:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:18.904 05:45:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:18.904 05:45:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.904 05:45:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.904 05:45:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:18.904 05:45:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:18.904 05:45:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.904 INFO: launching applications... 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66161 00:05:18.904 Waiting for target to run... 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.904 05:45:40 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66161 /var/tmp/spdk_tgt.sock 00:05:18.904 05:45:40 -- common/autotest_common.sh@829 -- # '[' -z 66161 ']' 00:05:18.904 05:45:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.904 05:45:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.904 05:45:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.904 05:45:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.904 05:45:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.904 [2024-12-15 05:45:40.340830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
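[editor's note] The trace above starts spdk_tgt with the extra-key JSON config and then waits on its RPC socket before the shutdown loop later in the log. A minimal standalone sketch of that start/wait/stop pattern (this is an illustration only, not the autotest_common.sh helpers themselves; it assumes the binary, socket and config paths printed in the trace) could look like:

    #!/usr/bin/env bash
    # Sketch of the pattern visible in the trace: launch the target with a JSON
    # config, poll its RPC socket until it answers, then stop it with SIGINT.
    SPDK=/home/vagrant/spdk_repo/spdk          # repo path as shown in the log
    SOCK=/var/tmp/spdk_tgt.sock                # RPC socket as shown in the log
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK/test/json_config/extra_key.json" &
    tgt_pid=$!
    # Wait for the target to listen, mirroring waitforlisten: retry an RPC call.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    # Shut down the same way the test does: SIGINT, then poll with kill -0.
    kill -SIGINT "$tgt_pid"
    for _ in $(seq 1 30); do
        kill -0 "$tgt_pid" 2>/dev/null || break
        sleep 0.5
    done

[end of editor's note]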
00:05:18.904 [2024-12-15 05:45:40.340994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66161 ] 00:05:19.163 [2024-12-15 05:45:40.635947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.163 [2024-12-15 05:45:40.658763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.163 [2024-12-15 05:45:40.658955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.100 05:45:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.100 05:45:41 -- common/autotest_common.sh@862 -- # return 0 00:05:20.100 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:20.100 INFO: shutting down applications... 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66161 ]] 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66161 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66161 00:05:20.100 05:45:41 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:20.359 05:45:41 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:20.359 05:45:41 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:20.359 05:45:41 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66161 00:05:20.359 05:45:41 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:20.359 05:45:41 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:20.359 05:45:41 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:20.359 SPDK target shutdown done 00:05:20.359 05:45:41 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:20.359 Success 00:05:20.359 05:45:41 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:20.359 00:05:20.359 real 0m1.750s 00:05:20.359 user 0m1.630s 00:05:20.359 sys 0m0.318s 00:05:20.359 05:45:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.359 05:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:20.359 ************************************ 00:05:20.359 END TEST json_config_extra_key 00:05:20.359 ************************************ 00:05:20.359 05:45:41 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.359 05:45:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.359 05:45:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.359 05:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:20.359 ************************************ 00:05:20.359 START TEST alias_rpc 00:05:20.359 ************************************ 00:05:20.359 05:45:41 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.618 * Looking for test storage... 00:05:20.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:20.618 05:45:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:20.618 05:45:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:20.618 05:45:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:20.618 05:45:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:20.618 05:45:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:20.618 05:45:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:20.618 05:45:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:20.618 05:45:42 -- scripts/common.sh@335 -- # IFS=.-: 00:05:20.618 05:45:42 -- scripts/common.sh@335 -- # read -ra ver1 00:05:20.618 05:45:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.618 05:45:42 -- scripts/common.sh@336 -- # read -ra ver2 00:05:20.618 05:45:42 -- scripts/common.sh@337 -- # local 'op=<' 00:05:20.618 05:45:42 -- scripts/common.sh@339 -- # ver1_l=2 00:05:20.618 05:45:42 -- scripts/common.sh@340 -- # ver2_l=1 00:05:20.618 05:45:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:20.618 05:45:42 -- scripts/common.sh@343 -- # case "$op" in 00:05:20.618 05:45:42 -- scripts/common.sh@344 -- # : 1 00:05:20.618 05:45:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:20.619 05:45:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.619 05:45:42 -- scripts/common.sh@364 -- # decimal 1 00:05:20.619 05:45:42 -- scripts/common.sh@352 -- # local d=1 00:05:20.619 05:45:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.619 05:45:42 -- scripts/common.sh@354 -- # echo 1 00:05:20.619 05:45:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:20.619 05:45:42 -- scripts/common.sh@365 -- # decimal 2 00:05:20.619 05:45:42 -- scripts/common.sh@352 -- # local d=2 00:05:20.619 05:45:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.619 05:45:42 -- scripts/common.sh@354 -- # echo 2 00:05:20.619 05:45:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:20.619 05:45:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:20.619 05:45:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:20.619 05:45:42 -- scripts/common.sh@367 -- # return 0 00:05:20.619 05:45:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.619 05:45:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:20.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.619 --rc genhtml_branch_coverage=1 00:05:20.619 --rc genhtml_function_coverage=1 00:05:20.619 --rc genhtml_legend=1 00:05:20.619 --rc geninfo_all_blocks=1 00:05:20.619 --rc geninfo_unexecuted_blocks=1 00:05:20.619 00:05:20.619 ' 00:05:20.619 05:45:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:20.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.619 --rc genhtml_branch_coverage=1 00:05:20.619 --rc genhtml_function_coverage=1 00:05:20.619 --rc genhtml_legend=1 00:05:20.619 --rc geninfo_all_blocks=1 00:05:20.619 --rc geninfo_unexecuted_blocks=1 00:05:20.619 00:05:20.619 ' 00:05:20.619 05:45:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:20.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.619 --rc genhtml_branch_coverage=1 00:05:20.619 --rc genhtml_function_coverage=1 00:05:20.619 --rc genhtml_legend=1 
00:05:20.619 --rc geninfo_all_blocks=1 00:05:20.619 --rc geninfo_unexecuted_blocks=1 00:05:20.619 00:05:20.619 ' 00:05:20.619 05:45:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:20.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.619 --rc genhtml_branch_coverage=1 00:05:20.619 --rc genhtml_function_coverage=1 00:05:20.619 --rc genhtml_legend=1 00:05:20.619 --rc geninfo_all_blocks=1 00:05:20.619 --rc geninfo_unexecuted_blocks=1 00:05:20.619 00:05:20.619 ' 00:05:20.619 05:45:42 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.619 05:45:42 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66238 00:05:20.619 05:45:42 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.619 05:45:42 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66238 00:05:20.619 05:45:42 -- common/autotest_common.sh@829 -- # '[' -z 66238 ']' 00:05:20.619 05:45:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.619 05:45:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.619 05:45:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.619 05:45:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.619 05:45:42 -- common/autotest_common.sh@10 -- # set +x 00:05:20.619 [2024-12-15 05:45:42.169953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:20.619 [2024-12-15 05:45:42.170092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66238 ] 00:05:20.878 [2024-12-15 05:45:42.303538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.878 [2024-12-15 05:45:42.337745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.878 [2024-12-15 05:45:42.337961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.814 05:45:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.814 05:45:43 -- common/autotest_common.sh@862 -- # return 0 00:05:21.814 05:45:43 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:22.073 05:45:43 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66238 00:05:22.074 05:45:43 -- common/autotest_common.sh@936 -- # '[' -z 66238 ']' 00:05:22.074 05:45:43 -- common/autotest_common.sh@940 -- # kill -0 66238 00:05:22.074 05:45:43 -- common/autotest_common.sh@941 -- # uname 00:05:22.074 05:45:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.074 05:45:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66238 00:05:22.074 05:45:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.074 05:45:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.074 killing process with pid 66238 00:05:22.074 05:45:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66238' 00:05:22.074 05:45:43 -- common/autotest_common.sh@955 -- # kill 66238 00:05:22.074 05:45:43 -- common/autotest_common.sh@960 -- # wait 66238 00:05:22.333 00:05:22.333 real 0m1.821s 00:05:22.333 user 0m2.220s 00:05:22.333 sys 0m0.343s 00:05:22.333 05:45:43 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.333 ************************************ 00:05:22.333 05:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:22.333 END TEST alias_rpc 00:05:22.333 ************************************ 00:05:22.333 05:45:43 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:22.333 05:45:43 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:22.333 05:45:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.333 05:45:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.333 05:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:22.333 ************************************ 00:05:22.333 START TEST spdkcli_tcp 00:05:22.333 ************************************ 00:05:22.333 05:45:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:22.333 * Looking for test storage... 00:05:22.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:22.333 05:45:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:22.333 05:45:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:22.333 05:45:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.333 05:45:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.333 05:45:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.333 05:45:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.333 05:45:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.333 05:45:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.333 05:45:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.333 05:45:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.333 05:45:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.333 05:45:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.333 05:45:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:22.333 05:45:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.333 05:45:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.333 05:45:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.333 05:45:43 -- scripts/common.sh@344 -- # : 1 00:05:22.333 05:45:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.333 05:45:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.333 05:45:43 -- scripts/common.sh@364 -- # decimal 1 00:05:22.333 05:45:43 -- scripts/common.sh@352 -- # local d=1 00:05:22.333 05:45:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.333 05:45:43 -- scripts/common.sh@354 -- # echo 1 00:05:22.333 05:45:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.333 05:45:43 -- scripts/common.sh@365 -- # decimal 2 00:05:22.333 05:45:43 -- scripts/common.sh@352 -- # local d=2 00:05:22.333 05:45:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.333 05:45:43 -- scripts/common.sh@354 -- # echo 2 00:05:22.333 05:45:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.333 05:45:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.333 05:45:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.333 05:45:43 -- scripts/common.sh@367 -- # return 0 00:05:22.333 05:45:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.333 05:45:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.333 --rc genhtml_branch_coverage=1 00:05:22.333 --rc genhtml_function_coverage=1 00:05:22.333 --rc genhtml_legend=1 00:05:22.333 --rc geninfo_all_blocks=1 00:05:22.333 --rc geninfo_unexecuted_blocks=1 00:05:22.333 00:05:22.333 ' 00:05:22.333 05:45:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.333 --rc genhtml_branch_coverage=1 00:05:22.333 --rc genhtml_function_coverage=1 00:05:22.333 --rc genhtml_legend=1 00:05:22.333 --rc geninfo_all_blocks=1 00:05:22.333 --rc geninfo_unexecuted_blocks=1 00:05:22.333 00:05:22.333 ' 00:05:22.333 05:45:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.333 --rc genhtml_branch_coverage=1 00:05:22.333 --rc genhtml_function_coverage=1 00:05:22.333 --rc genhtml_legend=1 00:05:22.333 --rc geninfo_all_blocks=1 00:05:22.333 --rc geninfo_unexecuted_blocks=1 00:05:22.333 00:05:22.333 ' 00:05:22.333 05:45:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.333 --rc genhtml_branch_coverage=1 00:05:22.333 --rc genhtml_function_coverage=1 00:05:22.333 --rc genhtml_legend=1 00:05:22.333 --rc geninfo_all_blocks=1 00:05:22.333 --rc geninfo_unexecuted_blocks=1 00:05:22.333 00:05:22.333 ' 00:05:22.333 05:45:43 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:22.333 05:45:43 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:22.333 05:45:43 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:22.333 05:45:43 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:22.333 05:45:43 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:22.333 05:45:43 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:22.333 05:45:43 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:22.333 05:45:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.333 05:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:22.333 05:45:43 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66322 00:05:22.333 05:45:43 -- spdkcli/tcp.sh@27 -- # waitforlisten 66322 00:05:22.333 05:45:43 -- spdkcli/tcp.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.333 05:45:43 -- common/autotest_common.sh@829 -- # '[' -z 66322 ']' 00:05:22.593 05:45:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.593 05:45:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.593 05:45:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.593 05:45:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.593 05:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:22.593 [2024-12-15 05:45:44.032008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:22.593 [2024-12-15 05:45:44.032130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66322 ] 00:05:22.593 [2024-12-15 05:45:44.168756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.593 [2024-12-15 05:45:44.202888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.593 [2024-12-15 05:45:44.203193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.593 [2024-12-15 05:45:44.203198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.531 05:45:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.531 05:45:45 -- common/autotest_common.sh@862 -- # return 0 00:05:23.531 05:45:45 -- spdkcli/tcp.sh@31 -- # socat_pid=66339 00:05:23.531 05:45:45 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.531 05:45:45 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:23.790 [ 00:05:23.790 "bdev_malloc_delete", 00:05:23.790 "bdev_malloc_create", 00:05:23.790 "bdev_null_resize", 00:05:23.790 "bdev_null_delete", 00:05:23.790 "bdev_null_create", 00:05:23.790 "bdev_nvme_cuse_unregister", 00:05:23.790 "bdev_nvme_cuse_register", 00:05:23.790 "bdev_opal_new_user", 00:05:23.790 "bdev_opal_set_lock_state", 00:05:23.790 "bdev_opal_delete", 00:05:23.790 "bdev_opal_get_info", 00:05:23.790 "bdev_opal_create", 00:05:23.790 "bdev_nvme_opal_revert", 00:05:23.790 "bdev_nvme_opal_init", 00:05:23.790 "bdev_nvme_send_cmd", 00:05:23.790 "bdev_nvme_get_path_iostat", 00:05:23.790 "bdev_nvme_get_mdns_discovery_info", 00:05:23.790 "bdev_nvme_stop_mdns_discovery", 00:05:23.790 "bdev_nvme_start_mdns_discovery", 00:05:23.790 "bdev_nvme_set_multipath_policy", 00:05:23.790 "bdev_nvme_set_preferred_path", 00:05:23.790 "bdev_nvme_get_io_paths", 00:05:23.790 "bdev_nvme_remove_error_injection", 00:05:23.790 "bdev_nvme_add_error_injection", 00:05:23.790 "bdev_nvme_get_discovery_info", 00:05:23.790 "bdev_nvme_stop_discovery", 00:05:23.790 "bdev_nvme_start_discovery", 00:05:23.790 "bdev_nvme_get_controller_health_info", 00:05:23.790 "bdev_nvme_disable_controller", 00:05:23.790 "bdev_nvme_enable_controller", 00:05:23.790 "bdev_nvme_reset_controller", 00:05:23.790 "bdev_nvme_get_transport_statistics", 00:05:23.790 "bdev_nvme_apply_firmware", 00:05:23.790 "bdev_nvme_detach_controller", 00:05:23.790 "bdev_nvme_get_controllers", 00:05:23.790 "bdev_nvme_attach_controller", 00:05:23.790 
"bdev_nvme_set_hotplug", 00:05:23.790 "bdev_nvme_set_options", 00:05:23.790 "bdev_passthru_delete", 00:05:23.790 "bdev_passthru_create", 00:05:23.790 "bdev_lvol_grow_lvstore", 00:05:23.790 "bdev_lvol_get_lvols", 00:05:23.790 "bdev_lvol_get_lvstores", 00:05:23.790 "bdev_lvol_delete", 00:05:23.790 "bdev_lvol_set_read_only", 00:05:23.790 "bdev_lvol_resize", 00:05:23.790 "bdev_lvol_decouple_parent", 00:05:23.790 "bdev_lvol_inflate", 00:05:23.790 "bdev_lvol_rename", 00:05:23.790 "bdev_lvol_clone_bdev", 00:05:23.790 "bdev_lvol_clone", 00:05:23.790 "bdev_lvol_snapshot", 00:05:23.790 "bdev_lvol_create", 00:05:23.790 "bdev_lvol_delete_lvstore", 00:05:23.790 "bdev_lvol_rename_lvstore", 00:05:23.790 "bdev_lvol_create_lvstore", 00:05:23.790 "bdev_raid_set_options", 00:05:23.790 "bdev_raid_remove_base_bdev", 00:05:23.790 "bdev_raid_add_base_bdev", 00:05:23.790 "bdev_raid_delete", 00:05:23.790 "bdev_raid_create", 00:05:23.790 "bdev_raid_get_bdevs", 00:05:23.790 "bdev_error_inject_error", 00:05:23.790 "bdev_error_delete", 00:05:23.790 "bdev_error_create", 00:05:23.790 "bdev_split_delete", 00:05:23.790 "bdev_split_create", 00:05:23.790 "bdev_delay_delete", 00:05:23.790 "bdev_delay_create", 00:05:23.790 "bdev_delay_update_latency", 00:05:23.790 "bdev_zone_block_delete", 00:05:23.790 "bdev_zone_block_create", 00:05:23.790 "blobfs_create", 00:05:23.790 "blobfs_detect", 00:05:23.790 "blobfs_set_cache_size", 00:05:23.790 "bdev_aio_delete", 00:05:23.790 "bdev_aio_rescan", 00:05:23.790 "bdev_aio_create", 00:05:23.791 "bdev_ftl_set_property", 00:05:23.791 "bdev_ftl_get_properties", 00:05:23.791 "bdev_ftl_get_stats", 00:05:23.791 "bdev_ftl_unmap", 00:05:23.791 "bdev_ftl_unload", 00:05:23.791 "bdev_ftl_delete", 00:05:23.791 "bdev_ftl_load", 00:05:23.791 "bdev_ftl_create", 00:05:23.791 "bdev_virtio_attach_controller", 00:05:23.791 "bdev_virtio_scsi_get_devices", 00:05:23.791 "bdev_virtio_detach_controller", 00:05:23.791 "bdev_virtio_blk_set_hotplug", 00:05:23.791 "bdev_iscsi_delete", 00:05:23.791 "bdev_iscsi_create", 00:05:23.791 "bdev_iscsi_set_options", 00:05:23.791 "bdev_uring_delete", 00:05:23.791 "bdev_uring_create", 00:05:23.791 "accel_error_inject_error", 00:05:23.791 "ioat_scan_accel_module", 00:05:23.791 "dsa_scan_accel_module", 00:05:23.791 "iaa_scan_accel_module", 00:05:23.791 "iscsi_set_options", 00:05:23.791 "iscsi_get_auth_groups", 00:05:23.791 "iscsi_auth_group_remove_secret", 00:05:23.791 "iscsi_auth_group_add_secret", 00:05:23.791 "iscsi_delete_auth_group", 00:05:23.791 "iscsi_create_auth_group", 00:05:23.791 "iscsi_set_discovery_auth", 00:05:23.791 "iscsi_get_options", 00:05:23.791 "iscsi_target_node_request_logout", 00:05:23.791 "iscsi_target_node_set_redirect", 00:05:23.791 "iscsi_target_node_set_auth", 00:05:23.791 "iscsi_target_node_add_lun", 00:05:23.791 "iscsi_get_connections", 00:05:23.791 "iscsi_portal_group_set_auth", 00:05:23.791 "iscsi_start_portal_group", 00:05:23.791 "iscsi_delete_portal_group", 00:05:23.791 "iscsi_create_portal_group", 00:05:23.791 "iscsi_get_portal_groups", 00:05:23.791 "iscsi_delete_target_node", 00:05:23.791 "iscsi_target_node_remove_pg_ig_maps", 00:05:23.791 "iscsi_target_node_add_pg_ig_maps", 00:05:23.791 "iscsi_create_target_node", 00:05:23.791 "iscsi_get_target_nodes", 00:05:23.791 "iscsi_delete_initiator_group", 00:05:23.791 "iscsi_initiator_group_remove_initiators", 00:05:23.791 "iscsi_initiator_group_add_initiators", 00:05:23.791 "iscsi_create_initiator_group", 00:05:23.791 "iscsi_get_initiator_groups", 00:05:23.791 "nvmf_set_crdt", 00:05:23.791 
"nvmf_set_config", 00:05:23.791 "nvmf_set_max_subsystems", 00:05:23.791 "nvmf_subsystem_get_listeners", 00:05:23.791 "nvmf_subsystem_get_qpairs", 00:05:23.791 "nvmf_subsystem_get_controllers", 00:05:23.791 "nvmf_get_stats", 00:05:23.791 "nvmf_get_transports", 00:05:23.791 "nvmf_create_transport", 00:05:23.791 "nvmf_get_targets", 00:05:23.791 "nvmf_delete_target", 00:05:23.791 "nvmf_create_target", 00:05:23.791 "nvmf_subsystem_allow_any_host", 00:05:23.791 "nvmf_subsystem_remove_host", 00:05:23.791 "nvmf_subsystem_add_host", 00:05:23.791 "nvmf_subsystem_remove_ns", 00:05:23.791 "nvmf_subsystem_add_ns", 00:05:23.791 "nvmf_subsystem_listener_set_ana_state", 00:05:23.791 "nvmf_discovery_get_referrals", 00:05:23.791 "nvmf_discovery_remove_referral", 00:05:23.791 "nvmf_discovery_add_referral", 00:05:23.791 "nvmf_subsystem_remove_listener", 00:05:23.791 "nvmf_subsystem_add_listener", 00:05:23.791 "nvmf_delete_subsystem", 00:05:23.791 "nvmf_create_subsystem", 00:05:23.791 "nvmf_get_subsystems", 00:05:23.791 "env_dpdk_get_mem_stats", 00:05:23.791 "nbd_get_disks", 00:05:23.791 "nbd_stop_disk", 00:05:23.791 "nbd_start_disk", 00:05:23.791 "ublk_recover_disk", 00:05:23.791 "ublk_get_disks", 00:05:23.791 "ublk_stop_disk", 00:05:23.791 "ublk_start_disk", 00:05:23.791 "ublk_destroy_target", 00:05:23.791 "ublk_create_target", 00:05:23.791 "virtio_blk_create_transport", 00:05:23.791 "virtio_blk_get_transports", 00:05:23.791 "vhost_controller_set_coalescing", 00:05:23.791 "vhost_get_controllers", 00:05:23.791 "vhost_delete_controller", 00:05:23.791 "vhost_create_blk_controller", 00:05:23.791 "vhost_scsi_controller_remove_target", 00:05:23.791 "vhost_scsi_controller_add_target", 00:05:23.791 "vhost_start_scsi_controller", 00:05:23.791 "vhost_create_scsi_controller", 00:05:23.791 "thread_set_cpumask", 00:05:23.791 "framework_get_scheduler", 00:05:23.791 "framework_set_scheduler", 00:05:23.791 "framework_get_reactors", 00:05:23.791 "thread_get_io_channels", 00:05:23.791 "thread_get_pollers", 00:05:23.791 "thread_get_stats", 00:05:23.791 "framework_monitor_context_switch", 00:05:23.791 "spdk_kill_instance", 00:05:23.791 "log_enable_timestamps", 00:05:23.791 "log_get_flags", 00:05:23.791 "log_clear_flag", 00:05:23.791 "log_set_flag", 00:05:23.791 "log_get_level", 00:05:23.791 "log_set_level", 00:05:23.791 "log_get_print_level", 00:05:23.791 "log_set_print_level", 00:05:23.791 "framework_enable_cpumask_locks", 00:05:23.791 "framework_disable_cpumask_locks", 00:05:23.791 "framework_wait_init", 00:05:23.791 "framework_start_init", 00:05:23.791 "scsi_get_devices", 00:05:23.791 "bdev_get_histogram", 00:05:23.791 "bdev_enable_histogram", 00:05:23.791 "bdev_set_qos_limit", 00:05:23.791 "bdev_set_qd_sampling_period", 00:05:23.791 "bdev_get_bdevs", 00:05:23.791 "bdev_reset_iostat", 00:05:23.791 "bdev_get_iostat", 00:05:23.791 "bdev_examine", 00:05:23.791 "bdev_wait_for_examine", 00:05:23.791 "bdev_set_options", 00:05:23.791 "notify_get_notifications", 00:05:23.791 "notify_get_types", 00:05:23.791 "accel_get_stats", 00:05:23.791 "accel_set_options", 00:05:23.791 "accel_set_driver", 00:05:23.791 "accel_crypto_key_destroy", 00:05:23.791 "accel_crypto_keys_get", 00:05:23.791 "accel_crypto_key_create", 00:05:23.791 "accel_assign_opc", 00:05:23.791 "accel_get_module_info", 00:05:23.791 "accel_get_opc_assignments", 00:05:23.791 "vmd_rescan", 00:05:23.791 "vmd_remove_device", 00:05:23.791 "vmd_enable", 00:05:23.791 "sock_set_default_impl", 00:05:23.791 "sock_impl_set_options", 00:05:23.791 "sock_impl_get_options", 00:05:23.791 
"iobuf_get_stats", 00:05:23.791 "iobuf_set_options", 00:05:23.791 "framework_get_pci_devices", 00:05:23.791 "framework_get_config", 00:05:23.791 "framework_get_subsystems", 00:05:23.791 "trace_get_info", 00:05:23.791 "trace_get_tpoint_group_mask", 00:05:23.791 "trace_disable_tpoint_group", 00:05:23.791 "trace_enable_tpoint_group", 00:05:23.791 "trace_clear_tpoint_mask", 00:05:23.791 "trace_set_tpoint_mask", 00:05:23.791 "spdk_get_version", 00:05:23.791 "rpc_get_methods" 00:05:23.791 ] 00:05:23.791 05:45:45 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:23.791 05:45:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.791 05:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:23.791 05:45:45 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:23.791 05:45:45 -- spdkcli/tcp.sh@38 -- # killprocess 66322 00:05:23.791 05:45:45 -- common/autotest_common.sh@936 -- # '[' -z 66322 ']' 00:05:23.791 05:45:45 -- common/autotest_common.sh@940 -- # kill -0 66322 00:05:23.791 05:45:45 -- common/autotest_common.sh@941 -- # uname 00:05:23.791 05:45:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:23.791 05:45:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66322 00:05:23.791 05:45:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:23.791 05:45:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:23.791 killing process with pid 66322 00:05:23.791 05:45:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66322' 00:05:23.791 05:45:45 -- common/autotest_common.sh@955 -- # kill 66322 00:05:23.791 05:45:45 -- common/autotest_common.sh@960 -- # wait 66322 00:05:24.050 ************************************ 00:05:24.050 END TEST spdkcli_tcp 00:05:24.050 ************************************ 00:05:24.050 00:05:24.050 real 0m1.825s 00:05:24.050 user 0m3.568s 00:05:24.050 sys 0m0.388s 00:05:24.050 05:45:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.050 05:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.050 05:45:45 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.050 05:45:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.050 05:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.050 05:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.050 ************************************ 00:05:24.050 START TEST dpdk_mem_utility 00:05:24.050 ************************************ 00:05:24.050 05:45:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.309 * Looking for test storage... 
00:05:24.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:24.309 05:45:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:24.309 05:45:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:24.309 05:45:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:24.309 05:45:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:24.309 05:45:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:24.309 05:45:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:24.309 05:45:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:24.309 05:45:45 -- scripts/common.sh@335 -- # IFS=.-: 00:05:24.309 05:45:45 -- scripts/common.sh@335 -- # read -ra ver1 00:05:24.309 05:45:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.309 05:45:45 -- scripts/common.sh@336 -- # read -ra ver2 00:05:24.309 05:45:45 -- scripts/common.sh@337 -- # local 'op=<' 00:05:24.309 05:45:45 -- scripts/common.sh@339 -- # ver1_l=2 00:05:24.309 05:45:45 -- scripts/common.sh@340 -- # ver2_l=1 00:05:24.309 05:45:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:24.309 05:45:45 -- scripts/common.sh@343 -- # case "$op" in 00:05:24.309 05:45:45 -- scripts/common.sh@344 -- # : 1 00:05:24.309 05:45:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:24.309 05:45:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.309 05:45:45 -- scripts/common.sh@364 -- # decimal 1 00:05:24.309 05:45:45 -- scripts/common.sh@352 -- # local d=1 00:05:24.309 05:45:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.309 05:45:45 -- scripts/common.sh@354 -- # echo 1 00:05:24.309 05:45:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:24.309 05:45:45 -- scripts/common.sh@365 -- # decimal 2 00:05:24.309 05:45:45 -- scripts/common.sh@352 -- # local d=2 00:05:24.309 05:45:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.309 05:45:45 -- scripts/common.sh@354 -- # echo 2 00:05:24.309 05:45:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:24.309 05:45:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:24.309 05:45:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:24.309 05:45:45 -- scripts/common.sh@367 -- # return 0 00:05:24.309 05:45:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.309 05:45:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:24.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.309 --rc genhtml_branch_coverage=1 00:05:24.309 --rc genhtml_function_coverage=1 00:05:24.309 --rc genhtml_legend=1 00:05:24.309 --rc geninfo_all_blocks=1 00:05:24.309 --rc geninfo_unexecuted_blocks=1 00:05:24.309 00:05:24.309 ' 00:05:24.309 05:45:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:24.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.309 --rc genhtml_branch_coverage=1 00:05:24.309 --rc genhtml_function_coverage=1 00:05:24.309 --rc genhtml_legend=1 00:05:24.309 --rc geninfo_all_blocks=1 00:05:24.309 --rc geninfo_unexecuted_blocks=1 00:05:24.309 00:05:24.309 ' 00:05:24.309 05:45:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:24.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.309 --rc genhtml_branch_coverage=1 00:05:24.309 --rc genhtml_function_coverage=1 00:05:24.309 --rc genhtml_legend=1 00:05:24.309 --rc geninfo_all_blocks=1 00:05:24.309 --rc geninfo_unexecuted_blocks=1 00:05:24.309 00:05:24.309 ' 
00:05:24.309 05:45:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:24.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.309 --rc genhtml_branch_coverage=1 00:05:24.309 --rc genhtml_function_coverage=1 00:05:24.309 --rc genhtml_legend=1 00:05:24.309 --rc geninfo_all_blocks=1 00:05:24.309 --rc geninfo_unexecuted_blocks=1 00:05:24.309 00:05:24.309 ' 00:05:24.309 05:45:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.309 05:45:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66420 00:05:24.309 05:45:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.309 05:45:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66420 00:05:24.309 05:45:45 -- common/autotest_common.sh@829 -- # '[' -z 66420 ']' 00:05:24.309 05:45:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.309 05:45:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.309 05:45:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.309 05:45:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.309 05:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.309 [2024-12-15 05:45:45.909570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:24.309 [2024-12-15 05:45:45.909842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66420 ] 00:05:24.568 [2024-12-15 05:45:46.039760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.568 [2024-12-15 05:45:46.074774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.568 [2024-12-15 05:45:46.075015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.507 05:45:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.507 05:45:46 -- common/autotest_common.sh@862 -- # return 0 00:05:25.507 05:45:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.507 05:45:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.507 05:45:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.507 05:45:46 -- common/autotest_common.sh@10 -- # set +x 00:05:25.507 { 00:05:25.507 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.507 } 00:05:25.507 05:45:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.507 05:45:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.507 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:25.507 1 heaps totaling size 814.000000 MiB 00:05:25.507 size: 814.000000 MiB heap id: 0 00:05:25.507 end heaps---------- 00:05:25.507 8 mempools totaling size 598.116089 MiB 00:05:25.507 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.507 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.507 size: 84.521057 MiB name: bdev_io_66420 00:05:25.507 size: 51.011292 MiB name: evtpool_66420 00:05:25.507 size: 50.003479 MiB name: msgpool_66420 
00:05:25.507 size: 21.763794 MiB name: PDU_Pool 00:05:25.507 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.507 size: 0.026123 MiB name: Session_Pool 00:05:25.507 end mempools------- 00:05:25.507 6 memzones totaling size 4.142822 MiB 00:05:25.507 size: 1.000366 MiB name: RG_ring_0_66420 00:05:25.507 size: 1.000366 MiB name: RG_ring_1_66420 00:05:25.507 size: 1.000366 MiB name: RG_ring_4_66420 00:05:25.507 size: 1.000366 MiB name: RG_ring_5_66420 00:05:25.507 size: 0.125366 MiB name: RG_ring_2_66420 00:05:25.507 size: 0.015991 MiB name: RG_ring_3_66420 00:05:25.507 end memzones------- 00:05:25.507 05:45:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.507 heap id: 0 total size: 814.000000 MiB number of busy elements: 305 number of free elements: 15 00:05:25.507 list of free elements. size: 12.471008 MiB 00:05:25.507 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:25.507 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:25.507 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:25.507 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:25.507 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:25.507 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:25.507 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:25.507 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:25.507 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:25.507 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:05:25.507 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:25.507 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:25.507 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:25.507 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:25.508 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:25.508 list of standard malloc elements. 
size: 199.266418 MiB 00:05:25.508 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:25.508 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:25.508 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:25.508 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:25.508 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:25.508 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:25.508 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:25.508 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.508 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:25.508 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:25.508 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:25.508 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa91cc0 
with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:25.508 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94180 with size: 0.000183 MiB 
00:05:25.509 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:25.509 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:25.509 element at 
address: 0x200027e6d380 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f840 
with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:25.509 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:25.509 list of memzone associated elements. size: 602.262573 MiB 00:05:25.509 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:25.509 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.509 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:25.509 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.509 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:25.509 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66420_0 00:05:25.509 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:25.510 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66420_0 00:05:25.510 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:25.510 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66420_0 00:05:25.510 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:25.510 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.510 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:25.510 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.510 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:25.510 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66420 00:05:25.510 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:25.510 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66420 00:05:25.510 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:25.510 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66420 00:05:25.510 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:25.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.510 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:25.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.510 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:25.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.510 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:25.510 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.510 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:25.510 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66420 00:05:25.510 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:25.510 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66420 00:05:25.510 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:25.510 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66420 00:05:25.510 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:25.510 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66420 00:05:25.510 element 
at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:25.510 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66420 00:05:25.510 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:25.510 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.510 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:25.510 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.510 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:25.510 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.510 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:25.510 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66420 00:05:25.510 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:25.510 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.510 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:25.510 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.510 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:25.510 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66420 00:05:25.510 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:25.510 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.510 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:25.510 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66420 00:05:25.510 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:25.510 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66420 00:05:25.510 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:25.510 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.510 05:45:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.510 05:45:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66420 00:05:25.510 05:45:47 -- common/autotest_common.sh@936 -- # '[' -z 66420 ']' 00:05:25.510 05:45:47 -- common/autotest_common.sh@940 -- # kill -0 66420 00:05:25.510 05:45:47 -- common/autotest_common.sh@941 -- # uname 00:05:25.510 05:45:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.510 05:45:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66420 00:05:25.510 05:45:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.510 killing process with pid 66420 00:05:25.510 05:45:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.510 05:45:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66420' 00:05:25.510 05:45:47 -- common/autotest_common.sh@955 -- # kill 66420 00:05:25.510 05:45:47 -- common/autotest_common.sh@960 -- # wait 66420 00:05:25.769 00:05:25.769 real 0m1.619s 00:05:25.769 user 0m1.847s 00:05:25.769 sys 0m0.337s 00:05:25.769 ************************************ 00:05:25.769 END TEST dpdk_mem_utility 00:05:25.769 ************************************ 00:05:25.769 05:45:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.769 05:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:25.769 05:45:47 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.769 05:45:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.769 05:45:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.769 05:45:47 -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.769 ************************************ 00:05:25.769 START TEST event 00:05:25.769 ************************************ 00:05:25.769 05:45:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.028 * Looking for test storage... 00:05:26.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:26.028 05:45:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.028 05:45:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.028 05:45:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.028 05:45:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.028 05:45:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.028 05:45:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.028 05:45:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.028 05:45:47 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.028 05:45:47 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.028 05:45:47 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.028 05:45:47 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.028 05:45:47 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.028 05:45:47 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.028 05:45:47 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.028 05:45:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.028 05:45:47 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.028 05:45:47 -- scripts/common.sh@344 -- # : 1 00:05:26.028 05:45:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.028 05:45:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.028 05:45:47 -- scripts/common.sh@364 -- # decimal 1 00:05:26.028 05:45:47 -- scripts/common.sh@352 -- # local d=1 00:05:26.028 05:45:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.028 05:45:47 -- scripts/common.sh@354 -- # echo 1 00:05:26.028 05:45:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.028 05:45:47 -- scripts/common.sh@365 -- # decimal 2 00:05:26.028 05:45:47 -- scripts/common.sh@352 -- # local d=2 00:05:26.028 05:45:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.028 05:45:47 -- scripts/common.sh@354 -- # echo 2 00:05:26.028 05:45:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.028 05:45:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.028 05:45:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.028 05:45:47 -- scripts/common.sh@367 -- # return 0 00:05:26.028 05:45:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.028 05:45:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.028 --rc genhtml_branch_coverage=1 00:05:26.028 --rc genhtml_function_coverage=1 00:05:26.028 --rc genhtml_legend=1 00:05:26.028 --rc geninfo_all_blocks=1 00:05:26.028 --rc geninfo_unexecuted_blocks=1 00:05:26.028 00:05:26.028 ' 00:05:26.028 05:45:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.028 --rc genhtml_branch_coverage=1 00:05:26.028 --rc genhtml_function_coverage=1 00:05:26.028 --rc genhtml_legend=1 00:05:26.028 --rc geninfo_all_blocks=1 00:05:26.028 --rc geninfo_unexecuted_blocks=1 00:05:26.028 00:05:26.028 ' 00:05:26.028 05:45:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:05:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.028 --rc genhtml_branch_coverage=1 00:05:26.028 --rc genhtml_function_coverage=1 00:05:26.028 --rc genhtml_legend=1 00:05:26.028 --rc geninfo_all_blocks=1 00:05:26.029 --rc geninfo_unexecuted_blocks=1 00:05:26.029 00:05:26.029 ' 00:05:26.029 05:45:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.029 --rc genhtml_branch_coverage=1 00:05:26.029 --rc genhtml_function_coverage=1 00:05:26.029 --rc genhtml_legend=1 00:05:26.029 --rc geninfo_all_blocks=1 00:05:26.029 --rc geninfo_unexecuted_blocks=1 00:05:26.029 00:05:26.029 ' 00:05:26.029 05:45:47 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:26.029 05:45:47 -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.029 05:45:47 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.029 05:45:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:26.029 05:45:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.029 05:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:26.029 ************************************ 00:05:26.029 START TEST event_perf 00:05:26.029 ************************************ 00:05:26.029 05:45:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.029 Running I/O for 1 seconds...[2024-12-15 05:45:47.558305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:26.029 [2024-12-15 05:45:47.558553] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66493 ] 00:05:26.288 [2024-12-15 05:45:47.693205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.288 [2024-12-15 05:45:47.733082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.288 [2024-12-15 05:45:47.733220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.288 [2024-12-15 05:45:47.733365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.288 [2024-12-15 05:45:47.733368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.224 Running I/O for 1 seconds... 00:05:27.224 lcore 0: 189712 00:05:27.224 lcore 1: 189712 00:05:27.224 lcore 2: 189712 00:05:27.224 lcore 3: 189712 00:05:27.224 done. 
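The four "lcore N" figures above are event_perf's summary: one reactor runs per bit set in the -m core mask, and each reports how many events it processed during the -t window. A minimal re-run sketch, using the same path and flags that appear in this trace (absolute counts will differ from host to host):

    # drive reactors on cores 0-3 for one second and print per-lcore event counts
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # a narrower mask such as -m 0x3 should report lcore 0 and lcore 1 only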
00:05:27.224 ************************************ 00:05:27.224 END TEST event_perf 00:05:27.224 ************************************ 00:05:27.224 00:05:27.224 real 0m1.252s 00:05:27.224 user 0m4.083s 00:05:27.224 sys 0m0.045s 00:05:27.224 05:45:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.224 05:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.224 05:45:48 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.224 05:45:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:27.224 05:45:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.224 05:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.224 ************************************ 00:05:27.224 START TEST event_reactor 00:05:27.224 ************************************ 00:05:27.224 05:45:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.224 [2024-12-15 05:45:48.854181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:27.224 [2024-12-15 05:45:48.854596] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66526 ] 00:05:27.483 [2024-12-15 05:45:48.992260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.483 [2024-12-15 05:45:49.028121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.862 test_start 00:05:28.862 oneshot 00:05:28.862 tick 100 00:05:28.862 tick 100 00:05:28.862 tick 250 00:05:28.862 tick 100 00:05:28.862 tick 100 00:05:28.862 tick 100 00:05:28.862 tick 250 00:05:28.862 tick 500 00:05:28.862 tick 100 00:05:28.862 tick 100 00:05:28.862 tick 250 00:05:28.862 tick 100 00:05:28.862 tick 100 00:05:28.862 test_end 00:05:28.862 ************************************ 00:05:28.862 END TEST event_reactor 00:05:28.862 ************************************ 00:05:28.862 00:05:28.862 real 0m1.238s 00:05:28.862 user 0m1.093s 00:05:28.862 sys 0m0.039s 00:05:28.862 05:45:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.862 05:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:28.862 05:45:50 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.862 05:45:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:28.862 05:45:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.862 05:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:28.862 ************************************ 00:05:28.862 START TEST event_reactor_perf 00:05:28.862 ************************************ 00:05:28.862 05:45:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.862 [2024-12-15 05:45:50.148534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:28.862 [2024-12-15 05:45:50.148785] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66567 ] 00:05:28.862 [2024-12-15 05:45:50.285070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.862 [2024-12-15 05:45:50.319912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.800 test_start 00:05:29.800 test_end 00:05:29.800 Performance: 394831 events per second 00:05:29.800 ************************************ 00:05:29.800 END TEST event_reactor_perf 00:05:29.800 ************************************ 00:05:29.800 00:05:29.800 real 0m1.243s 00:05:29.800 user 0m1.095s 00:05:29.800 sys 0m0.041s 00:05:29.800 05:45:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.800 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:29.800 05:45:51 -- event/event.sh@49 -- # uname -s 00:05:29.800 05:45:51 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.800 05:45:51 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.800 05:45:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.800 05:45:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.800 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:29.800 ************************************ 00:05:29.800 START TEST event_scheduler 00:05:29.800 ************************************ 00:05:29.800 05:45:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:30.059 * Looking for test storage... 00:05:30.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:30.059 05:45:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.059 05:45:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.059 05:45:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.059 05:45:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.059 05:45:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.059 05:45:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.059 05:45:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.059 05:45:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.059 05:45:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.059 05:45:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.059 05:45:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.059 05:45:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.059 05:45:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.059 05:45:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.059 05:45:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.059 05:45:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.059 05:45:51 -- scripts/common.sh@344 -- # : 1 00:05:30.059 05:45:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.059 05:45:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.059 05:45:51 -- scripts/common.sh@364 -- # decimal 1 00:05:30.059 05:45:51 -- scripts/common.sh@352 -- # local d=1 00:05:30.059 05:45:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.059 05:45:51 -- scripts/common.sh@354 -- # echo 1 00:05:30.059 05:45:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.059 05:45:51 -- scripts/common.sh@365 -- # decimal 2 00:05:30.059 05:45:51 -- scripts/common.sh@352 -- # local d=2 00:05:30.059 05:45:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.059 05:45:51 -- scripts/common.sh@354 -- # echo 2 00:05:30.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.059 05:45:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.059 05:45:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.059 05:45:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.059 05:45:51 -- scripts/common.sh@367 -- # return 0 00:05:30.059 05:45:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.059 05:45:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.059 --rc genhtml_branch_coverage=1 00:05:30.059 --rc genhtml_function_coverage=1 00:05:30.059 --rc genhtml_legend=1 00:05:30.059 --rc geninfo_all_blocks=1 00:05:30.059 --rc geninfo_unexecuted_blocks=1 00:05:30.059 00:05:30.059 ' 00:05:30.059 05:45:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.059 --rc genhtml_branch_coverage=1 00:05:30.059 --rc genhtml_function_coverage=1 00:05:30.059 --rc genhtml_legend=1 00:05:30.059 --rc geninfo_all_blocks=1 00:05:30.059 --rc geninfo_unexecuted_blocks=1 00:05:30.059 00:05:30.059 ' 00:05:30.059 05:45:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.059 --rc genhtml_branch_coverage=1 00:05:30.059 --rc genhtml_function_coverage=1 00:05:30.059 --rc genhtml_legend=1 00:05:30.059 --rc geninfo_all_blocks=1 00:05:30.059 --rc geninfo_unexecuted_blocks=1 00:05:30.059 00:05:30.059 ' 00:05:30.059 05:45:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.059 --rc genhtml_branch_coverage=1 00:05:30.059 --rc genhtml_function_coverage=1 00:05:30.059 --rc genhtml_legend=1 00:05:30.059 --rc geninfo_all_blocks=1 00:05:30.059 --rc geninfo_unexecuted_blocks=1 00:05:30.059 00:05:30.059 ' 00:05:30.059 05:45:51 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.059 05:45:51 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66630 00:05:30.059 05:45:51 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.059 05:45:51 -- scheduler/scheduler.sh@37 -- # waitforlisten 66630 00:05:30.059 05:45:51 -- common/autotest_common.sh@829 -- # '[' -z 66630 ']' 00:05:30.059 05:45:51 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.059 05:45:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.059 05:45:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.059 05:45:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
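The scheduler test follows the launch-and-wait pattern visible in this part of the trace: the app is started with --wait-for-rpc so it pauses after bringing up its RPC server, waitforlisten blocks until the UNIX socket accepts connections, and only then do the framework_set_scheduler and framework_start_init RPCs further down configure and release it. A rough sketch of that sequence, assuming the autotest helpers (waitforlisten, rpc_cmd) are sourced as they are in this run:

    # start the test app paused at the RPC stage, on cores 0-3 with main lcore 2
    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    waitforlisten "$scheduler_pid" /var/tmp/spdk.sock   # poll until the socket is up (assumed behaviour)
    rpc_cmd framework_set_scheduler dynamic             # choose the scheduler before init
    rpc_cmd framework_start_init                        # let the app finish starting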
00:05:30.059 05:45:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.059 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.059 [2024-12-15 05:45:51.659814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:30.059 [2024-12-15 05:45:51.660137] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66630 ] 00:05:30.318 [2024-12-15 05:45:51.799822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.318 [2024-12-15 05:45:51.844133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.318 [2024-12-15 05:45:51.844226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.318 [2024-12-15 05:45:51.844327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.318 [2024-12-15 05:45:51.844331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.318 05:45:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.318 05:45:51 -- common/autotest_common.sh@862 -- # return 0 00:05:30.318 05:45:51 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.318 05:45:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.318 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.318 POWER: Env isn't set yet! 00:05:30.318 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:30.318 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.318 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.318 POWER: Attempting to initialise PSTAT power management... 00:05:30.318 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.318 POWER: Cannot set governor of lcore 0 to performance 00:05:30.318 POWER: Attempting to initialise CPPC power management... 00:05:30.318 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.318 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.318 POWER: Attempting to initialise VM power management... 
00:05:30.318 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:30.318 POWER: Unable to set Power Management Environment for lcore 0 00:05:30.318 [2024-12-15 05:45:51.917627] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:30.318 [2024-12-15 05:45:51.917639] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:30.318 [2024-12-15 05:45:51.917646] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.318 [2024-12-15 05:45:51.917656] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.318 [2024-12-15 05:45:51.917663] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.318 [2024-12-15 05:45:51.917685] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.318 05:45:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.318 05:45:51 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.318 05:45:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.318 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 [2024-12-15 05:45:51.971918] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:30.578 05:45:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:51 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.578 05:45:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.578 05:45:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.578 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 ************************************ 00:05:30.578 START TEST scheduler_create_thread 00:05:30.578 ************************************ 00:05:30.578 05:45:51 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:30.578 05:45:51 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.578 05:45:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 2 00:05:30.578 05:45:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 3 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 4 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 5 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 6 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 7 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 8 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 9 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 10 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.578 05:45:52 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:30.578 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.578 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.146 05:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.146 05:45:52 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:31.146 05:45:52 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:31.146 05:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.146 05:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:32.524 ************************************ 00:05:32.525 END TEST scheduler_create_thread 00:05:32.525 ************************************ 00:05:32.525 05:45:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.525 00:05:32.525 real 0m1.751s 00:05:32.525 user 0m0.018s 00:05:32.525 sys 0m0.006s 00:05:32.525 05:45:53 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.525 05:45:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.525 05:45:53 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:32.525 05:45:53 -- scheduler/scheduler.sh@46 -- # killprocess 66630 00:05:32.525 05:45:53 -- common/autotest_common.sh@936 -- # '[' -z 66630 ']' 00:05:32.525 05:45:53 -- common/autotest_common.sh@940 -- # kill -0 66630 00:05:32.525 05:45:53 -- common/autotest_common.sh@941 -- # uname 00:05:32.525 05:45:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.525 05:45:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66630 00:05:32.525 killing process with pid 66630 00:05:32.525 05:45:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:32.525 05:45:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:32.525 05:45:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66630' 00:05:32.525 05:45:53 -- common/autotest_common.sh@955 -- # kill 66630 00:05:32.525 05:45:53 -- common/autotest_common.sh@960 -- # wait 66630 00:05:32.784 [2024-12-15 05:45:54.218237] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:32.784 ************************************ 00:05:32.784 END TEST event_scheduler 00:05:32.784 ************************************ 00:05:32.784 00:05:32.784 real 0m2.924s 00:05:32.784 user 0m3.749s 00:05:32.784 sys 0m0.291s 00:05:32.784 05:45:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.784 05:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:32.784 05:45:54 -- event/event.sh@51 -- # modprobe -n nbd 00:05:32.784 05:45:54 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:32.784 05:45:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.784 05:45:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.784 05:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:32.784 ************************************ 00:05:32.784 START TEST app_repeat 00:05:32.784 ************************************ 00:05:32.784 05:45:54 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:32.784 05:45:54 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.784 05:45:54 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.784 05:45:54 -- event/event.sh@13 -- # local nbd_list 00:05:32.784 05:45:54 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.784 05:45:54 -- event/event.sh@14 -- # local bdev_list 00:05:32.784 05:45:54 -- event/event.sh@15 -- # local repeat_times=4 00:05:32.784 05:45:54 -- event/event.sh@17 -- # modprobe nbd 00:05:32.784 Process app_repeat pid: 66711 00:05:32.784 spdk_app_start Round 0 00:05:32.784 05:45:54 -- event/event.sh@19 -- # repeat_pid=66711 00:05:32.784 05:45:54 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.784 05:45:54 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:32.784 05:45:54 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66711' 00:05:32.784 05:45:54 -- event/event.sh@23 -- # for i in {0..2} 00:05:32.784 05:45:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:32.784 05:45:54 -- event/event.sh@25 -- # waitforlisten 66711 /var/tmp/spdk-nbd.sock 00:05:32.784 05:45:54 -- common/autotest_common.sh@829 -- # '[' -z 66711 ']' 00:05:32.784 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:05:32.784 05:45:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.784 05:45:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.784 05:45:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.784 05:45:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.784 05:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:33.043 [2024-12-15 05:45:54.439742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:33.043 [2024-12-15 05:45:54.439833] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66711 ] 00:05:33.043 [2024-12-15 05:45:54.575623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.043 [2024-12-15 05:45:54.611776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.043 [2024-12-15 05:45:54.611784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.302 05:45:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.302 05:45:54 -- common/autotest_common.sh@862 -- # return 0 00:05:33.302 05:45:54 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.561 Malloc0 00:05:33.561 05:45:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.820 Malloc1 00:05:33.820 05:45:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@12 -- # local i 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.820 05:45:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.119 /dev/nbd0 00:05:34.120 05:45:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.120 05:45:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.120 05:45:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:34.120 05:45:55 -- common/autotest_common.sh@867 -- # local i 00:05:34.120 05:45:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:34.120 05:45:55 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:34.120 05:45:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:34.120 05:45:55 -- common/autotest_common.sh@871 -- # break 00:05:34.120 05:45:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:34.120 05:45:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:34.120 05:45:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.120 1+0 records in 00:05:34.120 1+0 records out 00:05:34.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246963 s, 16.6 MB/s 00:05:34.120 05:45:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.120 05:45:55 -- common/autotest_common.sh@884 -- # size=4096 00:05:34.120 05:45:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.120 05:45:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:34.120 05:45:55 -- common/autotest_common.sh@887 -- # return 0 00:05:34.120 05:45:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.120 05:45:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.120 05:45:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.379 /dev/nbd1 00:05:34.379 05:45:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.379 05:45:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.380 05:45:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:34.380 05:45:55 -- common/autotest_common.sh@867 -- # local i 00:05:34.380 05:45:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:34.380 05:45:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:34.380 05:45:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:34.380 05:45:55 -- common/autotest_common.sh@871 -- # break 00:05:34.380 05:45:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:34.380 05:45:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:34.380 05:45:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.380 1+0 records in 00:05:34.380 1+0 records out 00:05:34.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263674 s, 15.5 MB/s 00:05:34.380 05:45:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.380 05:45:55 -- common/autotest_common.sh@884 -- # size=4096 00:05:34.380 05:45:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.380 05:45:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:34.380 05:45:55 -- common/autotest_common.sh@887 -- # return 0 00:05:34.380 05:45:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.380 05:45:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.380 05:45:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.380 05:45:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.380 05:45:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.639 { 00:05:34.639 "nbd_device": "/dev/nbd0", 00:05:34.639 "bdev_name": "Malloc0" 00:05:34.639 }, 00:05:34.639 { 00:05:34.639 "nbd_device": "/dev/nbd1", 
00:05:34.639 "bdev_name": "Malloc1" 00:05:34.639 } 00:05:34.639 ]' 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.639 { 00:05:34.639 "nbd_device": "/dev/nbd0", 00:05:34.639 "bdev_name": "Malloc0" 00:05:34.639 }, 00:05:34.639 { 00:05:34.639 "nbd_device": "/dev/nbd1", 00:05:34.639 "bdev_name": "Malloc1" 00:05:34.639 } 00:05:34.639 ]' 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.639 /dev/nbd1' 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.639 /dev/nbd1' 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.639 256+0 records in 00:05:34.639 256+0 records out 00:05:34.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767286 s, 137 MB/s 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.639 256+0 records in 00:05:34.639 256+0 records out 00:05:34.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244649 s, 42.9 MB/s 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.639 05:45:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.898 256+0 records in 00:05:34.898 256+0 records out 00:05:34.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243353 s, 43.1 MB/s 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@51 -- # local i 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.898 05:45:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@41 -- # break 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.157 05:45:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@41 -- # break 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.416 05:45:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@65 -- # true 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.675 05:45:57 -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.675 05:45:57 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.934 05:45:57 -- event/event.sh@35 -- # sleep 3 00:05:35.934 [2024-12-15 05:45:57.511646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.934 [2024-12-15 05:45:57.544248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.934 [2024-12-15 
05:45:57.544259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.194 [2024-12-15 05:45:57.573835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.194 [2024-12-15 05:45:57.573924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.481 spdk_app_start Round 1 00:05:39.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.481 05:46:00 -- event/event.sh@23 -- # for i in {0..2} 00:05:39.481 05:46:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:39.481 05:46:00 -- event/event.sh@25 -- # waitforlisten 66711 /var/tmp/spdk-nbd.sock 00:05:39.481 05:46:00 -- common/autotest_common.sh@829 -- # '[' -z 66711 ']' 00:05:39.481 05:46:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.481 05:46:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.481 05:46:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.481 05:46:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.481 05:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:39.481 05:46:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.481 05:46:00 -- common/autotest_common.sh@862 -- # return 0 00:05:39.481 05:46:00 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.481 Malloc0 00:05:39.481 05:46:00 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.739 Malloc1 00:05:39.739 05:46:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.739 05:46:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.739 05:46:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.739 05:46:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.739 05:46:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@12 -- # local i 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.740 05:46:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.740 /dev/nbd0 00:05:39.998 05:46:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.998 05:46:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.998 05:46:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:39.998 05:46:01 -- common/autotest_common.sh@867 -- # local i 00:05:39.998 05:46:01 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:05:39.998 05:46:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.998 05:46:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:39.998 05:46:01 -- common/autotest_common.sh@871 -- # break 00:05:39.998 05:46:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.998 05:46:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.999 05:46:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.999 1+0 records in 00:05:39.999 1+0 records out 00:05:39.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238707 s, 17.2 MB/s 00:05:39.999 05:46:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.999 05:46:01 -- common/autotest_common.sh@884 -- # size=4096 00:05:39.999 05:46:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.999 05:46:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.999 05:46:01 -- common/autotest_common.sh@887 -- # return 0 00:05:39.999 05:46:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.999 05:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.999 05:46:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.257 /dev/nbd1 00:05:40.257 05:46:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.257 05:46:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.257 05:46:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:40.257 05:46:01 -- common/autotest_common.sh@867 -- # local i 00:05:40.257 05:46:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.257 05:46:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.257 05:46:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:40.257 05:46:01 -- common/autotest_common.sh@871 -- # break 00:05:40.257 05:46:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.257 05:46:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.257 05:46:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.257 1+0 records in 00:05:40.257 1+0 records out 00:05:40.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240947 s, 17.0 MB/s 00:05:40.257 05:46:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.257 05:46:01 -- common/autotest_common.sh@884 -- # size=4096 00:05:40.257 05:46:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.257 05:46:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.257 05:46:01 -- common/autotest_common.sh@887 -- # return 0 00:05:40.257 05:46:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.257 05:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.257 05:46:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.257 05:46:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.257 05:46:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.516 { 00:05:40.516 "nbd_device": "/dev/nbd0", 00:05:40.516 "bdev_name": "Malloc0" 00:05:40.516 }, 00:05:40.516 { 00:05:40.516 
"nbd_device": "/dev/nbd1", 00:05:40.516 "bdev_name": "Malloc1" 00:05:40.516 } 00:05:40.516 ]' 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.516 { 00:05:40.516 "nbd_device": "/dev/nbd0", 00:05:40.516 "bdev_name": "Malloc0" 00:05:40.516 }, 00:05:40.516 { 00:05:40.516 "nbd_device": "/dev/nbd1", 00:05:40.516 "bdev_name": "Malloc1" 00:05:40.516 } 00:05:40.516 ]' 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.516 /dev/nbd1' 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.516 /dev/nbd1' 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.516 05:46:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.516 256+0 records in 00:05:40.516 256+0 records out 00:05:40.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00977896 s, 107 MB/s 00:05:40.516 05:46:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.516 05:46:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.516 256+0 records in 00:05:40.516 256+0 records out 00:05:40.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024567 s, 42.7 MB/s 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.517 256+0 records in 00:05:40.517 256+0 records out 00:05:40.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273961 s, 38.3 MB/s 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.517 05:46:02 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@51 -- # local i 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.517 05:46:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@41 -- # break 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.776 05:46:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.035 05:46:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.035 05:46:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.035 05:46:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.035 05:46:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.036 05:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.036 05:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.036 05:46:02 -- bdev/nbd_common.sh@41 -- # break 00:05:41.036 05:46:02 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.036 05:46:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.036 05:46:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.036 05:46:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@65 -- # true 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.294 05:46:02 -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.294 05:46:02 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.553 05:46:03 -- event/event.sh@35 -- # sleep 3 00:05:41.813 [2024-12-15 05:46:03.299677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.813 [2024-12-15 05:46:03.333409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
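[Editor's note] The round that just completed exercises SPDK's NBD path end to end: malloc bdevs are created over the RPC socket, exported as /dev/nbd0 and /dev/nbd1, written with a random pattern, read back and compared, then torn down. A minimal standalone sketch of the same flow, assuming an spdk_tgt is already listening on /var/tmp/spdk-nbd.sock, that rpc.py sits at the repo path shown in the log, and using a hypothetical scratch file /tmp/pattern:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $RPC bdev_malloc_create 64 4096             # 64 MB malloc bdev, 4096-byte blocks; prints its name, e.g. Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0       # export the bdev as a kernel NBD device

  dd if=/dev/urandom of=/tmp/pattern bs=4096 count=256            # build a 1 MiB reference pattern
  dd if=/tmp/pattern of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the NBD device
  cmp -b -n 1M /tmp/pattern /dev/nbd0                              # verify the data survived the round trip

  $RPC nbd_stop_disk /dev/nbd0                # detach the NBD device
  $RPC spdk_kill_instance SIGTERM             # shut the target down

The RPC names (bdev_malloc_create, nbd_start_disk, nbd_stop_disk, spdk_kill_instance) are the same ones the nbd_common.sh helpers invoke above; error handling and the waitfornbd/waitfornbd_exit retry loops are omitted here.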
00:05:41.813 [2024-12-15 05:46:03.333418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.813 [2024-12-15 05:46:03.362380] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.813 [2024-12-15 05:46:03.362438] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.101 spdk_app_start Round 2 00:05:45.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.101 05:46:06 -- event/event.sh@23 -- # for i in {0..2} 00:05:45.101 05:46:06 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:45.101 05:46:06 -- event/event.sh@25 -- # waitforlisten 66711 /var/tmp/spdk-nbd.sock 00:05:45.101 05:46:06 -- common/autotest_common.sh@829 -- # '[' -z 66711 ']' 00:05:45.101 05:46:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.101 05:46:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.101 05:46:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.101 05:46:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.101 05:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.101 05:46:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.101 05:46:06 -- common/autotest_common.sh@862 -- # return 0 00:05:45.101 05:46:06 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.101 Malloc0 00:05:45.101 05:46:06 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.360 Malloc1 00:05:45.360 05:46:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.360 05:46:06 -- bdev/nbd_common.sh@12 -- # local i 00:05:45.619 05:46:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.619 05:46:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.619 05:46:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.619 /dev/nbd0 00:05:45.619 05:46:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.619 05:46:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.619 05:46:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:45.619 05:46:07 -- common/autotest_common.sh@867 -- # local i 00:05:45.619 05:46:07 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.619 05:46:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.619 05:46:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:45.619 05:46:07 -- common/autotest_common.sh@871 -- # break 00:05:45.619 05:46:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.619 05:46:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.619 05:46:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.619 1+0 records in 00:05:45.619 1+0 records out 00:05:45.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319317 s, 12.8 MB/s 00:05:45.619 05:46:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.619 05:46:07 -- common/autotest_common.sh@884 -- # size=4096 00:05:45.619 05:46:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.619 05:46:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.619 05:46:07 -- common/autotest_common.sh@887 -- # return 0 00:05:45.619 05:46:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.619 05:46:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.619 05:46:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.897 /dev/nbd1 00:05:46.156 05:46:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.156 05:46:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.156 05:46:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.156 05:46:07 -- common/autotest_common.sh@867 -- # local i 00:05:46.156 05:46:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.156 05:46:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.156 05:46:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.156 05:46:07 -- common/autotest_common.sh@871 -- # break 00:05:46.156 05:46:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.156 05:46:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.156 05:46:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.156 1+0 records in 00:05:46.156 1+0 records out 00:05:46.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514512 s, 8.0 MB/s 00:05:46.156 05:46:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.156 05:46:07 -- common/autotest_common.sh@884 -- # size=4096 00:05:46.156 05:46:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.156 05:46:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.156 05:46:07 -- common/autotest_common.sh@887 -- # return 0 00:05:46.156 05:46:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.156 05:46:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.156 05:46:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.156 05:46:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.156 05:46:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.415 05:46:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.415 { 00:05:46.415 "nbd_device": "/dev/nbd0", 00:05:46.416 "bdev_name": "Malloc0" 
00:05:46.416 }, 00:05:46.416 { 00:05:46.416 "nbd_device": "/dev/nbd1", 00:05:46.416 "bdev_name": "Malloc1" 00:05:46.416 } 00:05:46.416 ]' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.416 { 00:05:46.416 "nbd_device": "/dev/nbd0", 00:05:46.416 "bdev_name": "Malloc0" 00:05:46.416 }, 00:05:46.416 { 00:05:46.416 "nbd_device": "/dev/nbd1", 00:05:46.416 "bdev_name": "Malloc1" 00:05:46.416 } 00:05:46.416 ]' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.416 /dev/nbd1' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.416 /dev/nbd1' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.416 256+0 records in 00:05:46.416 256+0 records out 00:05:46.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00834525 s, 126 MB/s 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.416 256+0 records in 00:05:46.416 256+0 records out 00:05:46.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235697 s, 44.5 MB/s 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.416 256+0 records in 00:05:46.416 256+0 records out 00:05:46.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280459 s, 37.4 MB/s 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@51 -- # local i 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.416 05:46:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@41 -- # break 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.675 05:46:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@41 -- # break 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.242 05:46:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.501 05:46:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.501 05:46:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.501 05:46:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.501 05:46:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.502 05:46:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.502 05:46:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.502 05:46:08 -- bdev/nbd_common.sh@65 -- # true 00:05:47.502 05:46:08 -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.502 05:46:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.502 05:46:08 -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.502 05:46:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.502 05:46:08 -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.502 05:46:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.761 05:46:09 -- event/event.sh@35 -- # sleep 3 00:05:47.761 [2024-12-15 05:46:09.354774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.761 [2024-12-15 05:46:09.389504] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:47.761 [2024-12-15 05:46:09.389515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.020 [2024-12-15 05:46:09.421094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.020 [2024-12-15 05:46:09.421149] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.308 05:46:12 -- event/event.sh@38 -- # waitforlisten 66711 /var/tmp/spdk-nbd.sock 00:05:51.308 05:46:12 -- common/autotest_common.sh@829 -- # '[' -z 66711 ']' 00:05:51.308 05:46:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.308 05:46:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.308 05:46:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.308 05:46:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.308 05:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:51.308 05:46:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.308 05:46:12 -- common/autotest_common.sh@862 -- # return 0 00:05:51.308 05:46:12 -- event/event.sh@39 -- # killprocess 66711 00:05:51.308 05:46:12 -- common/autotest_common.sh@936 -- # '[' -z 66711 ']' 00:05:51.308 05:46:12 -- common/autotest_common.sh@940 -- # kill -0 66711 00:05:51.308 05:46:12 -- common/autotest_common.sh@941 -- # uname 00:05:51.308 05:46:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.308 05:46:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66711 00:05:51.308 killing process with pid 66711 00:05:51.308 05:46:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.308 05:46:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.308 05:46:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66711' 00:05:51.308 05:46:12 -- common/autotest_common.sh@955 -- # kill 66711 00:05:51.308 05:46:12 -- common/autotest_common.sh@960 -- # wait 66711 00:05:51.308 spdk_app_start is called in Round 0. 00:05:51.308 Shutdown signal received, stop current app iteration 00:05:51.308 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:51.308 spdk_app_start is called in Round 1. 00:05:51.308 Shutdown signal received, stop current app iteration 00:05:51.308 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:51.308 spdk_app_start is called in Round 2. 00:05:51.308 Shutdown signal received, stop current app iteration 00:05:51.308 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:51.308 spdk_app_start is called in Round 3. 
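[Editor's note] The round markers above and the shutdown notices that follow come from event.sh's repeat loop: for each round the harness echoes the marker, waits for the RPC socket, recreates Malloc0/Malloc1, reruns the NBD write/verify pass, then asks the app to drop the current iteration and sleeps before the next one. A rough sketch of that loop, reusing the $RPC shorthand from the previous sketch and eliding the verification body:

  for round in 0 1 2; do
      echo "spdk_app_start Round $round"
      # wait for /var/tmp/spdk-nbd.sock, create Malloc0/Malloc1, run the NBD write/verify pass
      $RPC spdk_kill_instance SIGTERM   # app_repeat treats SIGTERM as "stop this iteration" and reinitializes
      sleep 3                           # give the reactors time to come back up for the next round
  done

This is a condensed reading of the log, not the literal event.sh source; the real script also re-resolves the app's pid and retries the RPC socket between rounds.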
00:05:51.308 Shutdown signal received, stop current app iteration 00:05:51.308 05:46:12 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.308 05:46:12 -- event/event.sh@42 -- # return 0 00:05:51.308 00:05:51.308 real 0m18.268s 00:05:51.308 user 0m41.792s 00:05:51.308 sys 0m2.511s 00:05:51.308 05:46:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.308 05:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:51.308 ************************************ 00:05:51.308 END TEST app_repeat 00:05:51.308 ************************************ 00:05:51.308 05:46:12 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.308 05:46:12 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.308 05:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.308 05:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.308 05:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:51.308 ************************************ 00:05:51.308 START TEST cpu_locks 00:05:51.308 ************************************ 00:05:51.308 05:46:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.308 * Looking for test storage... 00:05:51.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.308 05:46:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:51.308 05:46:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:51.308 05:46:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:51.308 05:46:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:51.308 05:46:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:51.308 05:46:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:51.308 05:46:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:51.308 05:46:12 -- scripts/common.sh@335 -- # IFS=.-: 00:05:51.308 05:46:12 -- scripts/common.sh@335 -- # read -ra ver1 00:05:51.308 05:46:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.308 05:46:12 -- scripts/common.sh@336 -- # read -ra ver2 00:05:51.308 05:46:12 -- scripts/common.sh@337 -- # local 'op=<' 00:05:51.308 05:46:12 -- scripts/common.sh@339 -- # ver1_l=2 00:05:51.308 05:46:12 -- scripts/common.sh@340 -- # ver2_l=1 00:05:51.308 05:46:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:51.308 05:46:12 -- scripts/common.sh@343 -- # case "$op" in 00:05:51.308 05:46:12 -- scripts/common.sh@344 -- # : 1 00:05:51.308 05:46:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:51.308 05:46:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.308 05:46:12 -- scripts/common.sh@364 -- # decimal 1 00:05:51.308 05:46:12 -- scripts/common.sh@352 -- # local d=1 00:05:51.308 05:46:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.308 05:46:12 -- scripts/common.sh@354 -- # echo 1 00:05:51.308 05:46:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:51.308 05:46:12 -- scripts/common.sh@365 -- # decimal 2 00:05:51.308 05:46:12 -- scripts/common.sh@352 -- # local d=2 00:05:51.308 05:46:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.308 05:46:12 -- scripts/common.sh@354 -- # echo 2 00:05:51.308 05:46:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:51.308 05:46:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:51.308 05:46:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:51.308 05:46:12 -- scripts/common.sh@367 -- # return 0 00:05:51.308 05:46:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.308 05:46:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:51.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.308 --rc genhtml_branch_coverage=1 00:05:51.308 --rc genhtml_function_coverage=1 00:05:51.308 --rc genhtml_legend=1 00:05:51.308 --rc geninfo_all_blocks=1 00:05:51.308 --rc geninfo_unexecuted_blocks=1 00:05:51.308 00:05:51.308 ' 00:05:51.308 05:46:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:51.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.308 --rc genhtml_branch_coverage=1 00:05:51.308 --rc genhtml_function_coverage=1 00:05:51.308 --rc genhtml_legend=1 00:05:51.308 --rc geninfo_all_blocks=1 00:05:51.308 --rc geninfo_unexecuted_blocks=1 00:05:51.308 00:05:51.308 ' 00:05:51.308 05:46:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:51.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.308 --rc genhtml_branch_coverage=1 00:05:51.308 --rc genhtml_function_coverage=1 00:05:51.308 --rc genhtml_legend=1 00:05:51.308 --rc geninfo_all_blocks=1 00:05:51.308 --rc geninfo_unexecuted_blocks=1 00:05:51.308 00:05:51.308 ' 00:05:51.308 05:46:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:51.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.308 --rc genhtml_branch_coverage=1 00:05:51.308 --rc genhtml_function_coverage=1 00:05:51.308 --rc genhtml_legend=1 00:05:51.308 --rc geninfo_all_blocks=1 00:05:51.308 --rc geninfo_unexecuted_blocks=1 00:05:51.308 00:05:51.308 ' 00:05:51.308 05:46:12 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.308 05:46:12 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.308 05:46:12 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.308 05:46:12 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.308 05:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.308 05:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.308 05:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:51.308 ************************************ 00:05:51.308 START TEST default_locks 00:05:51.308 ************************************ 00:05:51.308 05:46:12 -- common/autotest_common.sh@1114 -- # default_locks 00:05:51.308 05:46:12 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67143 00:05:51.308 05:46:12 -- event/cpu_locks.sh@47 -- # waitforlisten 67143 00:05:51.308 05:46:12 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:05:51.308 05:46:12 -- common/autotest_common.sh@829 -- # '[' -z 67143 ']' 00:05:51.308 05:46:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.308 05:46:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.308 05:46:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.308 05:46:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.308 05:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:51.567 [2024-12-15 05:46:12.988485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:51.567 [2024-12-15 05:46:12.988754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67143 ] 00:05:51.567 [2024-12-15 05:46:13.125112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.567 [2024-12-15 05:46:13.158741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.567 [2024-12-15 05:46:13.159117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.503 05:46:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.503 05:46:13 -- common/autotest_common.sh@862 -- # return 0 00:05:52.503 05:46:13 -- event/cpu_locks.sh@49 -- # locks_exist 67143 00:05:52.503 05:46:13 -- event/cpu_locks.sh@22 -- # lslocks -p 67143 00:05:52.503 05:46:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.763 05:46:14 -- event/cpu_locks.sh@50 -- # killprocess 67143 00:05:52.763 05:46:14 -- common/autotest_common.sh@936 -- # '[' -z 67143 ']' 00:05:52.763 05:46:14 -- common/autotest_common.sh@940 -- # kill -0 67143 00:05:52.763 05:46:14 -- common/autotest_common.sh@941 -- # uname 00:05:52.763 05:46:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.763 05:46:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67143 00:05:52.763 05:46:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.763 05:46:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.763 05:46:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67143' 00:05:52.763 killing process with pid 67143 00:05:52.763 05:46:14 -- common/autotest_common.sh@955 -- # kill 67143 00:05:52.763 05:46:14 -- common/autotest_common.sh@960 -- # wait 67143 00:05:53.022 05:46:14 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67143 00:05:53.022 05:46:14 -- common/autotest_common.sh@650 -- # local es=0 00:05:53.022 05:46:14 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67143 00:05:53.022 05:46:14 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:53.022 05:46:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.022 05:46:14 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:53.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.022 ERROR: process (pid: 67143) is no longer running 00:05:53.022 05:46:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.022 05:46:14 -- common/autotest_common.sh@653 -- # waitforlisten 67143 00:05:53.022 05:46:14 -- common/autotest_common.sh@829 -- # '[' -z 67143 ']' 00:05:53.023 05:46:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.023 05:46:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.023 05:46:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.023 05:46:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.023 05:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:53.023 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67143) - No such process 00:05:53.023 05:46:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.023 05:46:14 -- common/autotest_common.sh@862 -- # return 1 00:05:53.023 05:46:14 -- common/autotest_common.sh@653 -- # es=1 00:05:53.023 05:46:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.023 05:46:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.023 05:46:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.023 05:46:14 -- event/cpu_locks.sh@54 -- # no_locks 00:05:53.023 05:46:14 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.023 05:46:14 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.023 05:46:14 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.023 00:05:53.023 real 0m1.577s 00:05:53.023 user 0m1.813s 00:05:53.023 sys 0m0.380s 00:05:53.023 05:46:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.023 05:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:53.023 ************************************ 00:05:53.023 END TEST default_locks 00:05:53.023 ************************************ 00:05:53.023 05:46:14 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:53.023 05:46:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.023 05:46:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.023 05:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:53.023 ************************************ 00:05:53.023 START TEST default_locks_via_rpc 00:05:53.023 ************************************ 00:05:53.023 05:46:14 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:53.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.023 05:46:14 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67190 00:05:53.023 05:46:14 -- event/cpu_locks.sh@63 -- # waitforlisten 67190 00:05:53.023 05:46:14 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.023 05:46:14 -- common/autotest_common.sh@829 -- # '[' -z 67190 ']' 00:05:53.023 05:46:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.023 05:46:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.023 05:46:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.023 05:46:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.023 05:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:53.023 [2024-12-15 05:46:14.619319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
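[Editor's note] Before default_locks_via_rpc gets going, it is worth spelling out what the default_locks pass just verified: while spdk_tgt runs on core 0 it holds a per-core lock that lslocks can see, and once the process is killed both the lock and the RPC socket disappear (hence the deliberate "process ... is no longer running" error above). A small sketch of that probe, assuming only that the lock's path contains the spdk_cpu_lock substring the test greps for:

  check_core_lock() {
      local pid=$1
      # spdk_tgt takes a per-core lock at startup; lslocks lists locks held by the pid
      if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
          echo "pid $pid holds a CPU core lock"
      else
          echo "pid $pid holds no CPU core lock" >&2
          return 1
      fi
  }
  # usage: check_core_lock 67143   (the pid printed by waitforlisten above)

check_core_lock is a hypothetical helper for illustration; the actual test inlines the same lslocks | grep check.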
00:05:53.023 [2024-12-15 05:46:14.619425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67190 ] 00:05:53.282 [2024-12-15 05:46:14.755360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.282 [2024-12-15 05:46:14.785908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.282 [2024-12-15 05:46:14.786068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.218 05:46:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.218 05:46:15 -- common/autotest_common.sh@862 -- # return 0 00:05:54.218 05:46:15 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:54.218 05:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.218 05:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:54.218 05:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.218 05:46:15 -- event/cpu_locks.sh@67 -- # no_locks 00:05:54.218 05:46:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.218 05:46:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.218 05:46:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.218 05:46:15 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.218 05:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.218 05:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:54.218 05:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.218 05:46:15 -- event/cpu_locks.sh@71 -- # locks_exist 67190 00:05:54.218 05:46:15 -- event/cpu_locks.sh@22 -- # lslocks -p 67190 00:05:54.218 05:46:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.477 05:46:15 -- event/cpu_locks.sh@73 -- # killprocess 67190 00:05:54.477 05:46:15 -- common/autotest_common.sh@936 -- # '[' -z 67190 ']' 00:05:54.477 05:46:15 -- common/autotest_common.sh@940 -- # kill -0 67190 00:05:54.477 05:46:15 -- common/autotest_common.sh@941 -- # uname 00:05:54.477 05:46:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.477 05:46:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67190 00:05:54.477 killing process with pid 67190 00:05:54.477 05:46:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.477 05:46:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.477 05:46:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67190' 00:05:54.477 05:46:15 -- common/autotest_common.sh@955 -- # kill 67190 00:05:54.477 05:46:15 -- common/autotest_common.sh@960 -- # wait 67190 00:05:54.736 00:05:54.736 real 0m1.656s 00:05:54.736 user 0m1.921s 00:05:54.736 sys 0m0.396s 00:05:54.736 05:46:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.736 ************************************ 00:05:54.736 END TEST default_locks_via_rpc 00:05:54.736 ************************************ 00:05:54.736 05:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.736 05:46:16 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.736 05:46:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.736 05:46:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.736 05:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.736 
************************************ 00:05:54.736 START TEST non_locking_app_on_locked_coremask 00:05:54.736 ************************************ 00:05:54.736 05:46:16 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:54.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.736 05:46:16 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67235 00:05:54.736 05:46:16 -- event/cpu_locks.sh@81 -- # waitforlisten 67235 /var/tmp/spdk.sock 00:05:54.736 05:46:16 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.736 05:46:16 -- common/autotest_common.sh@829 -- # '[' -z 67235 ']' 00:05:54.736 05:46:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.736 05:46:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.736 05:46:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.736 05:46:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.736 05:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.736 [2024-12-15 05:46:16.330119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:54.736 [2024-12-15 05:46:16.330217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67235 ] 00:05:54.995 [2024-12-15 05:46:16.460576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.995 [2024-12-15 05:46:16.493913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.995 [2024-12-15 05:46:16.494128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.933 05:46:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.933 05:46:17 -- common/autotest_common.sh@862 -- # return 0 00:05:55.933 05:46:17 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67251 00:05:55.933 05:46:17 -- event/cpu_locks.sh@85 -- # waitforlisten 67251 /var/tmp/spdk2.sock 00:05:55.933 05:46:17 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:55.933 05:46:17 -- common/autotest_common.sh@829 -- # '[' -z 67251 ']' 00:05:55.933 05:46:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.933 05:46:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.933 05:46:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.933 05:46:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.933 05:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:55.933 [2024-12-15 05:46:17.339620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:55.933 [2024-12-15 05:46:17.340135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67251 ] 00:05:55.933 [2024-12-15 05:46:17.481202] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
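[Editor's note] The non_locking_app_on_locked_coremask case that starts here runs two targets on the same core: the first claims the core-0 lock as usual, and the second is started with --disable-cpumask-locks and its own RPC socket, so it must come up despite the lock being held (the "CPU core locks deactivated" notice above is the second instance announcing exactly that). A condensed sketch of the setup, with backgrounding shown but the waitforlisten handling omitted:

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $BIN -m 0x1 &                                                    # first target: claims the core-0 lock
  $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &     # second target: same core, locking disabled, own socket

Both instances should log "Reactor started on core 0"; the flags and socket paths match what cpu_locks.sh passes in the log, while the backgrounded invocation is a simplification of the script's startup helpers.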
00:05:55.933 [2024-12-15 05:46:17.481269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.933 [2024-12-15 05:46:17.548339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.933 [2024-12-15 05:46:17.548505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.870 05:46:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.870 05:46:18 -- common/autotest_common.sh@862 -- # return 0 00:05:56.870 05:46:18 -- event/cpu_locks.sh@87 -- # locks_exist 67235 00:05:56.870 05:46:18 -- event/cpu_locks.sh@22 -- # lslocks -p 67235 00:05:56.870 05:46:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.808 05:46:19 -- event/cpu_locks.sh@89 -- # killprocess 67235 00:05:57.808 05:46:19 -- common/autotest_common.sh@936 -- # '[' -z 67235 ']' 00:05:57.808 05:46:19 -- common/autotest_common.sh@940 -- # kill -0 67235 00:05:57.808 05:46:19 -- common/autotest_common.sh@941 -- # uname 00:05:57.808 05:46:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.808 05:46:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67235 00:05:57.808 killing process with pid 67235 00:05:57.808 05:46:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.808 05:46:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.808 05:46:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67235' 00:05:57.808 05:46:19 -- common/autotest_common.sh@955 -- # kill 67235 00:05:57.808 05:46:19 -- common/autotest_common.sh@960 -- # wait 67235 00:05:58.067 05:46:19 -- event/cpu_locks.sh@90 -- # killprocess 67251 00:05:58.067 05:46:19 -- common/autotest_common.sh@936 -- # '[' -z 67251 ']' 00:05:58.067 05:46:19 -- common/autotest_common.sh@940 -- # kill -0 67251 00:05:58.067 05:46:19 -- common/autotest_common.sh@941 -- # uname 00:05:58.067 05:46:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.067 05:46:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67251 00:05:58.067 killing process with pid 67251 00:05:58.067 05:46:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.067 05:46:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.067 05:46:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67251' 00:05:58.067 05:46:19 -- common/autotest_common.sh@955 -- # kill 67251 00:05:58.067 05:46:19 -- common/autotest_common.sh@960 -- # wait 67251 00:05:58.326 ************************************ 00:05:58.326 END TEST non_locking_app_on_locked_coremask 00:05:58.326 ************************************ 00:05:58.326 00:05:58.326 real 0m3.547s 00:05:58.326 user 0m4.213s 00:05:58.326 sys 0m0.866s 00:05:58.327 05:46:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.327 05:46:19 -- common/autotest_common.sh@10 -- # set +x 00:05:58.327 05:46:19 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:58.327 05:46:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.327 05:46:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.327 05:46:19 -- common/autotest_common.sh@10 -- # set +x 00:05:58.327 ************************************ 00:05:58.327 START TEST locking_app_on_unlocked_coremask 00:05:58.327 ************************************ 00:05:58.327 05:46:19 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:58.327 05:46:19 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67313 00:05:58.327 05:46:19 -- event/cpu_locks.sh@99 -- # waitforlisten 67313 /var/tmp/spdk.sock 00:05:58.327 05:46:19 -- common/autotest_common.sh@829 -- # '[' -z 67313 ']' 00:05:58.327 05:46:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.327 05:46:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.327 05:46:19 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:58.327 05:46:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.327 05:46:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.327 05:46:19 -- common/autotest_common.sh@10 -- # set +x 00:05:58.327 [2024-12-15 05:46:19.931455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:58.327 [2024-12-15 05:46:19.931580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67313 ] 00:05:58.586 [2024-12-15 05:46:20.064080] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:58.586 [2024-12-15 05:46:20.064153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.586 [2024-12-15 05:46:20.097708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.586 [2024-12-15 05:46:20.097865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.523 05:46:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.523 05:46:20 -- common/autotest_common.sh@862 -- # return 0 00:05:59.523 05:46:20 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67329 00:05:59.523 05:46:20 -- event/cpu_locks.sh@103 -- # waitforlisten 67329 /var/tmp/spdk2.sock 00:05:59.523 05:46:20 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.523 05:46:20 -- common/autotest_common.sh@829 -- # '[' -z 67329 ']' 00:05:59.523 05:46:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.523 05:46:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.523 05:46:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.523 05:46:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.523 05:46:20 -- common/autotest_common.sh@10 -- # set +x 00:05:59.523 [2024-12-15 05:46:20.953014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:59.523 [2024-12-15 05:46:20.953317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67329 ] 00:05:59.523 [2024-12-15 05:46:21.095406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.782 [2024-12-15 05:46:21.164309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.782 [2024-12-15 05:46:21.164509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.350 05:46:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.350 05:46:21 -- common/autotest_common.sh@862 -- # return 0 00:06:00.350 05:46:21 -- event/cpu_locks.sh@105 -- # locks_exist 67329 00:06:00.350 05:46:21 -- event/cpu_locks.sh@22 -- # lslocks -p 67329 00:06:00.350 05:46:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.918 05:46:22 -- event/cpu_locks.sh@107 -- # killprocess 67313 00:06:00.918 05:46:22 -- common/autotest_common.sh@936 -- # '[' -z 67313 ']' 00:06:00.918 05:46:22 -- common/autotest_common.sh@940 -- # kill -0 67313 00:06:00.918 05:46:22 -- common/autotest_common.sh@941 -- # uname 00:06:00.918 05:46:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.918 05:46:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67313 00:06:01.240 05:46:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.240 05:46:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.240 05:46:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67313' 00:06:01.240 killing process with pid 67313 00:06:01.240 05:46:22 -- common/autotest_common.sh@955 -- # kill 67313 00:06:01.240 05:46:22 -- common/autotest_common.sh@960 -- # wait 67313 00:06:01.499 05:46:22 -- event/cpu_locks.sh@108 -- # killprocess 67329 00:06:01.499 05:46:22 -- common/autotest_common.sh@936 -- # '[' -z 67329 ']' 00:06:01.499 05:46:22 -- common/autotest_common.sh@940 -- # kill -0 67329 00:06:01.499 05:46:22 -- common/autotest_common.sh@941 -- # uname 00:06:01.499 05:46:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.499 05:46:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67329 00:06:01.499 killing process with pid 67329 00:06:01.499 05:46:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.499 05:46:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.499 05:46:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67329' 00:06:01.499 05:46:23 -- common/autotest_common.sh@955 -- # kill 67329 00:06:01.499 05:46:23 -- common/autotest_common.sh@960 -- # wait 67329 00:06:01.758 ************************************ 00:06:01.758 END TEST locking_app_on_unlocked_coremask 00:06:01.758 ************************************ 00:06:01.758 00:06:01.758 real 0m3.366s 00:06:01.758 user 0m4.024s 00:06:01.758 sys 0m0.788s 00:06:01.758 05:46:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.758 05:46:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.758 05:46:23 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:01.758 05:46:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.758 05:46:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.758 05:46:23 -- common/autotest_common.sh@10 -- # set +x 
00:06:01.758 ************************************ 00:06:01.758 START TEST locking_app_on_locked_coremask 00:06:01.758 ************************************ 00:06:01.758 05:46:23 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:01.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.758 05:46:23 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67390 00:06:01.758 05:46:23 -- event/cpu_locks.sh@116 -- # waitforlisten 67390 /var/tmp/spdk.sock 00:06:01.758 05:46:23 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.758 05:46:23 -- common/autotest_common.sh@829 -- # '[' -z 67390 ']' 00:06:01.758 05:46:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.758 05:46:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.758 05:46:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.758 05:46:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.758 05:46:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.758 [2024-12-15 05:46:23.344165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:01.758 [2024-12-15 05:46:23.344254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67390 ] 00:06:02.017 [2024-12-15 05:46:23.475527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.017 [2024-12-15 05:46:23.508685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.017 [2024-12-15 05:46:23.508873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.954 05:46:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.954 05:46:24 -- common/autotest_common.sh@862 -- # return 0 00:06:02.954 05:46:24 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67406 00:06:02.954 05:46:24 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67406 /var/tmp/spdk2.sock 00:06:02.954 05:46:24 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.954 05:46:24 -- common/autotest_common.sh@650 -- # local es=0 00:06:02.954 05:46:24 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67406 /var/tmp/spdk2.sock 00:06:02.954 05:46:24 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:02.954 05:46:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.954 05:46:24 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:02.954 05:46:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.954 05:46:24 -- common/autotest_common.sh@653 -- # waitforlisten 67406 /var/tmp/spdk2.sock 00:06:02.954 05:46:24 -- common/autotest_common.sh@829 -- # '[' -z 67406 ']' 00:06:02.954 05:46:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.954 05:46:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.954 05:46:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:02.954 05:46:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.954 05:46:24 -- common/autotest_common.sh@10 -- # set +x 00:06:02.954 [2024-12-15 05:46:24.373822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:02.954 [2024-12-15 05:46:24.373952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67406 ] 00:06:02.954 [2024-12-15 05:46:24.515054] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67390 has claimed it. 00:06:02.954 [2024-12-15 05:46:24.515121] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.521 ERROR: process (pid: 67406) is no longer running 00:06:03.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67406) - No such process 00:06:03.521 05:46:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.521 05:46:25 -- common/autotest_common.sh@862 -- # return 1 00:06:03.521 05:46:25 -- common/autotest_common.sh@653 -- # es=1 00:06:03.521 05:46:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.521 05:46:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.521 05:46:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.521 05:46:25 -- event/cpu_locks.sh@122 -- # locks_exist 67390 00:06:03.521 05:46:25 -- event/cpu_locks.sh@22 -- # lslocks -p 67390 00:06:03.521 05:46:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.088 05:46:25 -- event/cpu_locks.sh@124 -- # killprocess 67390 00:06:04.088 05:46:25 -- common/autotest_common.sh@936 -- # '[' -z 67390 ']' 00:06:04.088 05:46:25 -- common/autotest_common.sh@940 -- # kill -0 67390 00:06:04.088 05:46:25 -- common/autotest_common.sh@941 -- # uname 00:06:04.088 05:46:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.088 05:46:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67390 00:06:04.088 killing process with pid 67390 00:06:04.088 05:46:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.088 05:46:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.088 05:46:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67390' 00:06:04.088 05:46:25 -- common/autotest_common.sh@955 -- # kill 67390 00:06:04.088 05:46:25 -- common/autotest_common.sh@960 -- # wait 67390 00:06:04.347 00:06:04.347 real 0m2.494s 00:06:04.347 user 0m3.043s 00:06:04.347 sys 0m0.499s 00:06:04.347 05:46:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.347 ************************************ 00:06:04.347 END TEST locking_app_on_locked_coremask 00:06:04.347 ************************************ 00:06:04.347 05:46:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.347 05:46:25 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:04.347 05:46:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.347 05:46:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.347 05:46:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.347 ************************************ 00:06:04.347 START TEST locking_overlapped_coremask 00:06:04.347 ************************************ 00:06:04.347 05:46:25 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:04.347 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.347 05:46:25 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67452 00:06:04.347 05:46:25 -- event/cpu_locks.sh@133 -- # waitforlisten 67452 /var/tmp/spdk.sock 00:06:04.347 05:46:25 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:04.347 05:46:25 -- common/autotest_common.sh@829 -- # '[' -z 67452 ']' 00:06:04.347 05:46:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.347 05:46:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.347 05:46:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.347 05:46:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.347 05:46:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.347 [2024-12-15 05:46:25.906272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:04.347 [2024-12-15 05:46:25.906372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67452 ] 00:06:04.607 [2024-12-15 05:46:26.038402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.607 [2024-12-15 05:46:26.070771] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.607 [2024-12-15 05:46:26.071049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.607 [2024-12-15 05:46:26.071395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.607 [2024-12-15 05:46:26.071404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.543 05:46:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.543 05:46:26 -- common/autotest_common.sh@862 -- # return 0 00:06:05.543 05:46:26 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:05.543 05:46:26 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67470 00:06:05.543 05:46:26 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67470 /var/tmp/spdk2.sock 00:06:05.543 05:46:26 -- common/autotest_common.sh@650 -- # local es=0 00:06:05.543 05:46:26 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67470 /var/tmp/spdk2.sock 00:06:05.543 05:46:26 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:05.543 05:46:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.543 05:46:26 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:05.543 05:46:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.543 05:46:26 -- common/autotest_common.sh@653 -- # waitforlisten 67470 /var/tmp/spdk2.sock 00:06:05.544 05:46:26 -- common/autotest_common.sh@829 -- # '[' -z 67470 ']' 00:06:05.544 05:46:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.544 05:46:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.544 05:46:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:05.544 05:46:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.544 05:46:26 -- common/autotest_common.sh@10 -- # set +x 00:06:05.544 [2024-12-15 05:46:26.881164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:05.544 [2024-12-15 05:46:26.881408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67470 ] 00:06:05.544 [2024-12-15 05:46:27.016661] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67452 has claimed it. 00:06:05.544 [2024-12-15 05:46:27.019955] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.110 ERROR: process (pid: 67470) is no longer running 00:06:06.110 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67470) - No such process 00:06:06.110 05:46:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.110 05:46:27 -- common/autotest_common.sh@862 -- # return 1 00:06:06.110 05:46:27 -- common/autotest_common.sh@653 -- # es=1 00:06:06.110 05:46:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.110 05:46:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.110 05:46:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.110 05:46:27 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:06.110 05:46:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.110 05:46:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.110 05:46:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.110 05:46:27 -- event/cpu_locks.sh@141 -- # killprocess 67452 00:06:06.110 05:46:27 -- common/autotest_common.sh@936 -- # '[' -z 67452 ']' 00:06:06.110 05:46:27 -- common/autotest_common.sh@940 -- # kill -0 67452 00:06:06.110 05:46:27 -- common/autotest_common.sh@941 -- # uname 00:06:06.110 05:46:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.110 05:46:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67452 00:06:06.111 05:46:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:06.111 05:46:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:06.111 05:46:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67452' 00:06:06.111 killing process with pid 67452 00:06:06.111 05:46:27 -- common/autotest_common.sh@955 -- # kill 67452 00:06:06.111 05:46:27 -- common/autotest_common.sh@960 -- # wait 67452 00:06:06.369 00:06:06.369 real 0m2.024s 00:06:06.369 user 0m5.926s 00:06:06.369 sys 0m0.280s 00:06:06.369 05:46:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.369 05:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:06.369 ************************************ 00:06:06.369 END TEST locking_overlapped_coremask 00:06:06.369 ************************************ 00:06:06.369 05:46:27 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:06.369 05:46:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.369 05:46:27 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.369 05:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:06.369 ************************************ 00:06:06.369 START TEST locking_overlapped_coremask_via_rpc 00:06:06.369 ************************************ 00:06:06.369 05:46:27 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:06.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.369 05:46:27 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67510 00:06:06.369 05:46:27 -- event/cpu_locks.sh@149 -- # waitforlisten 67510 /var/tmp/spdk.sock 00:06:06.369 05:46:27 -- common/autotest_common.sh@829 -- # '[' -z 67510 ']' 00:06:06.369 05:46:27 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:06.369 05:46:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.369 05:46:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.369 05:46:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.369 05:46:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.369 05:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:06.369 [2024-12-15 05:46:27.972357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:06.369 [2024-12-15 05:46:27.972429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67510 ] 00:06:06.628 [2024-12-15 05:46:28.104828] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:06.628 [2024-12-15 05:46:28.104865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.628 [2024-12-15 05:46:28.138365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.628 [2024-12-15 05:46:28.138928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.628 [2024-12-15 05:46:28.138962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.628 [2024-12-15 05:46:28.138964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.564 05:46:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.564 05:46:28 -- common/autotest_common.sh@862 -- # return 0 00:06:07.564 05:46:28 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67528 00:06:07.564 05:46:28 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:07.565 05:46:28 -- event/cpu_locks.sh@153 -- # waitforlisten 67528 /var/tmp/spdk2.sock 00:06:07.565 05:46:28 -- common/autotest_common.sh@829 -- # '[' -z 67528 ']' 00:06:07.565 05:46:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.565 05:46:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.565 05:46:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:07.565 05:46:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.565 05:46:28 -- common/autotest_common.sh@10 -- # set +x 00:06:07.565 [2024-12-15 05:46:28.962500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:07.565 [2024-12-15 05:46:28.962606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67528 ] 00:06:07.565 [2024-12-15 05:46:29.106002] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.565 [2024-12-15 05:46:29.106053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.565 [2024-12-15 05:46:29.166414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.565 [2024-12-15 05:46:29.166670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.565 [2024-12-15 05:46:29.170027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:07.565 [2024-12-15 05:46:29.170029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.501 05:46:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.501 05:46:29 -- common/autotest_common.sh@862 -- # return 0 00:06:08.501 05:46:29 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.501 05:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.501 05:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.501 05:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.501 05:46:29 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.501 05:46:29 -- common/autotest_common.sh@650 -- # local es=0 00:06:08.501 05:46:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.501 05:46:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:08.501 05:46:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.501 05:46:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:08.501 05:46:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.501 05:46:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.501 05:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.501 05:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.501 [2024-12-15 05:46:29.936000] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67510 has claimed it. 
00:06:08.501 request: 00:06:08.501 { 00:06:08.501 "method": "framework_enable_cpumask_locks", 00:06:08.501 "req_id": 1 00:06:08.501 } 00:06:08.501 Got JSON-RPC error response 00:06:08.501 response: 00:06:08.501 { 00:06:08.501 "code": -32603, 00:06:08.501 "message": "Failed to claim CPU core: 2" 00:06:08.501 } 00:06:08.501 05:46:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.501 05:46:29 -- common/autotest_common.sh@653 -- # es=1 00:06:08.501 05:46:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.501 05:46:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.501 05:46:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.501 05:46:29 -- event/cpu_locks.sh@158 -- # waitforlisten 67510 /var/tmp/spdk.sock 00:06:08.501 05:46:29 -- common/autotest_common.sh@829 -- # '[' -z 67510 ']' 00:06:08.501 05:46:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.501 05:46:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.501 05:46:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.501 05:46:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.501 05:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.760 05:46:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.760 05:46:30 -- common/autotest_common.sh@862 -- # return 0 00:06:08.760 05:46:30 -- event/cpu_locks.sh@159 -- # waitforlisten 67528 /var/tmp/spdk2.sock 00:06:08.760 05:46:30 -- common/autotest_common.sh@829 -- # '[' -z 67528 ']' 00:06:08.760 05:46:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.760 05:46:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.760 05:46:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
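(A minimal sketch, not part of the captured output: the -32603 "Failed to claim CPU core: 2" response above is what the second target returns when cpumask locks are re-enabled over its RPC socket while pid 67510 already owns core 2. Assuming the repository path used elsewhere in this log and the standard scripts/rpc.py client, the call that triggers it would look like:)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks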
00:06:08.760 05:46:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.760 05:46:30 -- common/autotest_common.sh@10 -- # set +x 00:06:09.019 ************************************ 00:06:09.019 END TEST locking_overlapped_coremask_via_rpc 00:06:09.019 ************************************ 00:06:09.019 05:46:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.019 05:46:30 -- common/autotest_common.sh@862 -- # return 0 00:06:09.019 05:46:30 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:09.019 05:46:30 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.019 05:46:30 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.019 05:46:30 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.019 00:06:09.019 real 0m2.568s 00:06:09.019 user 0m1.302s 00:06:09.019 sys 0m0.177s 00:06:09.019 05:46:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.019 05:46:30 -- common/autotest_common.sh@10 -- # set +x 00:06:09.019 05:46:30 -- event/cpu_locks.sh@174 -- # cleanup 00:06:09.019 05:46:30 -- event/cpu_locks.sh@15 -- # [[ -z 67510 ]] 00:06:09.019 05:46:30 -- event/cpu_locks.sh@15 -- # killprocess 67510 00:06:09.019 05:46:30 -- common/autotest_common.sh@936 -- # '[' -z 67510 ']' 00:06:09.019 05:46:30 -- common/autotest_common.sh@940 -- # kill -0 67510 00:06:09.019 05:46:30 -- common/autotest_common.sh@941 -- # uname 00:06:09.019 05:46:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.019 05:46:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67510 00:06:09.019 killing process with pid 67510 00:06:09.019 05:46:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.019 05:46:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.019 05:46:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67510' 00:06:09.019 05:46:30 -- common/autotest_common.sh@955 -- # kill 67510 00:06:09.019 05:46:30 -- common/autotest_common.sh@960 -- # wait 67510 00:06:09.278 05:46:30 -- event/cpu_locks.sh@16 -- # [[ -z 67528 ]] 00:06:09.278 05:46:30 -- event/cpu_locks.sh@16 -- # killprocess 67528 00:06:09.278 05:46:30 -- common/autotest_common.sh@936 -- # '[' -z 67528 ']' 00:06:09.278 05:46:30 -- common/autotest_common.sh@940 -- # kill -0 67528 00:06:09.278 05:46:30 -- common/autotest_common.sh@941 -- # uname 00:06:09.278 05:46:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.278 05:46:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67528 00:06:09.278 killing process with pid 67528 00:06:09.279 05:46:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:09.279 05:46:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:09.279 05:46:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67528' 00:06:09.279 05:46:30 -- common/autotest_common.sh@955 -- # kill 67528 00:06:09.279 05:46:30 -- common/autotest_common.sh@960 -- # wait 67528 00:06:09.537 05:46:31 -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.537 05:46:31 -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.537 05:46:31 -- event/cpu_locks.sh@15 -- # [[ -z 67510 ]] 00:06:09.537 05:46:31 -- event/cpu_locks.sh@15 -- # killprocess 67510 00:06:09.537 05:46:31 -- 
common/autotest_common.sh@936 -- # '[' -z 67510 ']' 00:06:09.537 05:46:31 -- common/autotest_common.sh@940 -- # kill -0 67510 00:06:09.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67510) - No such process 00:06:09.537 05:46:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67510 is not found' 00:06:09.537 Process with pid 67510 is not found 00:06:09.537 05:46:31 -- event/cpu_locks.sh@16 -- # [[ -z 67528 ]] 00:06:09.537 Process with pid 67528 is not found 00:06:09.537 05:46:31 -- event/cpu_locks.sh@16 -- # killprocess 67528 00:06:09.537 05:46:31 -- common/autotest_common.sh@936 -- # '[' -z 67528 ']' 00:06:09.537 05:46:31 -- common/autotest_common.sh@940 -- # kill -0 67528 00:06:09.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67528) - No such process 00:06:09.537 05:46:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67528 is not found' 00:06:09.537 05:46:31 -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.537 ************************************ 00:06:09.537 END TEST cpu_locks 00:06:09.537 ************************************ 00:06:09.537 00:06:09.537 real 0m18.316s 00:06:09.537 user 0m34.084s 00:06:09.537 sys 0m4.016s 00:06:09.537 05:46:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.537 05:46:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.537 ************************************ 00:06:09.537 END TEST event 00:06:09.537 ************************************ 00:06:09.537 00:06:09.537 real 0m43.736s 00:06:09.538 user 1m26.100s 00:06:09.538 sys 0m7.206s 00:06:09.538 05:46:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.538 05:46:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.538 05:46:31 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:09.538 05:46:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.538 05:46:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.538 05:46:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.538 ************************************ 00:06:09.538 START TEST thread 00:06:09.538 ************************************ 00:06:09.538 05:46:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:09.797 * Looking for test storage... 
00:06:09.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:09.797 05:46:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.797 05:46:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:09.797 05:46:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.797 05:46:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:09.797 05:46:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:09.797 05:46:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:09.797 05:46:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:09.797 05:46:31 -- scripts/common.sh@335 -- # IFS=.-: 00:06:09.797 05:46:31 -- scripts/common.sh@335 -- # read -ra ver1 00:06:09.797 05:46:31 -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.797 05:46:31 -- scripts/common.sh@336 -- # read -ra ver2 00:06:09.797 05:46:31 -- scripts/common.sh@337 -- # local 'op=<' 00:06:09.797 05:46:31 -- scripts/common.sh@339 -- # ver1_l=2 00:06:09.797 05:46:31 -- scripts/common.sh@340 -- # ver2_l=1 00:06:09.797 05:46:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:09.797 05:46:31 -- scripts/common.sh@343 -- # case "$op" in 00:06:09.797 05:46:31 -- scripts/common.sh@344 -- # : 1 00:06:09.797 05:46:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:09.797 05:46:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.797 05:46:31 -- scripts/common.sh@364 -- # decimal 1 00:06:09.797 05:46:31 -- scripts/common.sh@352 -- # local d=1 00:06:09.797 05:46:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.797 05:46:31 -- scripts/common.sh@354 -- # echo 1 00:06:09.797 05:46:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:09.797 05:46:31 -- scripts/common.sh@365 -- # decimal 2 00:06:09.797 05:46:31 -- scripts/common.sh@352 -- # local d=2 00:06:09.797 05:46:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.797 05:46:31 -- scripts/common.sh@354 -- # echo 2 00:06:09.797 05:46:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.797 05:46:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.797 05:46:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.797 05:46:31 -- scripts/common.sh@367 -- # return 0 00:06:09.797 05:46:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.797 05:46:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.797 --rc genhtml_branch_coverage=1 00:06:09.797 --rc genhtml_function_coverage=1 00:06:09.797 --rc genhtml_legend=1 00:06:09.797 --rc geninfo_all_blocks=1 00:06:09.797 --rc geninfo_unexecuted_blocks=1 00:06:09.797 00:06:09.797 ' 00:06:09.797 05:46:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.797 --rc genhtml_branch_coverage=1 00:06:09.797 --rc genhtml_function_coverage=1 00:06:09.797 --rc genhtml_legend=1 00:06:09.797 --rc geninfo_all_blocks=1 00:06:09.797 --rc geninfo_unexecuted_blocks=1 00:06:09.797 00:06:09.797 ' 00:06:09.797 05:46:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.797 --rc genhtml_branch_coverage=1 00:06:09.797 --rc genhtml_function_coverage=1 00:06:09.797 --rc genhtml_legend=1 00:06:09.797 --rc geninfo_all_blocks=1 00:06:09.797 --rc geninfo_unexecuted_blocks=1 00:06:09.797 00:06:09.797 ' 00:06:09.797 05:46:31 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.797 --rc genhtml_branch_coverage=1 00:06:09.797 --rc genhtml_function_coverage=1 00:06:09.797 --rc genhtml_legend=1 00:06:09.797 --rc geninfo_all_blocks=1 00:06:09.797 --rc geninfo_unexecuted_blocks=1 00:06:09.797 00:06:09.797 ' 00:06:09.797 05:46:31 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.797 05:46:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:09.797 05:46:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.797 05:46:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.797 ************************************ 00:06:09.797 START TEST thread_poller_perf 00:06:09.797 ************************************ 00:06:09.797 05:46:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.797 [2024-12-15 05:46:31.333282] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:09.797 [2024-12-15 05:46:31.333571] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67652 ] 00:06:10.057 [2024-12-15 05:46:31.471010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.057 [2024-12-15 05:46:31.501274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.057 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:10.994 [2024-12-15T05:46:32.635Z] ====================================== 00:06:10.994 [2024-12-15T05:46:32.635Z] busy:2209000284 (cyc) 00:06:10.994 [2024-12-15T05:46:32.635Z] total_run_count: 356000 00:06:10.994 [2024-12-15T05:46:32.635Z] tsc_hz: 2200000000 (cyc) 00:06:10.994 [2024-12-15T05:46:32.635Z] ====================================== 00:06:10.994 [2024-12-15T05:46:32.635Z] poller_cost: 6205 (cyc), 2820 (nsec) 00:06:10.994 ************************************ 00:06:10.994 END TEST thread_poller_perf 00:06:10.994 ************************************ 00:06:10.994 00:06:10.994 real 0m1.237s 00:06:10.994 user 0m1.089s 00:06:10.994 sys 0m0.038s 00:06:10.994 05:46:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.994 05:46:32 -- common/autotest_common.sh@10 -- # set +x 00:06:10.994 05:46:32 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.994 05:46:32 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:10.994 05:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.994 05:46:32 -- common/autotest_common.sh@10 -- # set +x 00:06:10.994 ************************************ 00:06:10.994 START TEST thread_poller_perf 00:06:10.994 ************************************ 00:06:10.994 05:46:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.994 [2024-12-15 05:46:32.620751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:10.994 [2024-12-15 05:46:32.620830] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67687 ] 00:06:11.253 [2024-12-15 05:46:32.749770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.253 [2024-12-15 05:46:32.782003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.253 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:12.189 [2024-12-15T05:46:33.830Z] ====================================== 00:06:12.189 [2024-12-15T05:46:33.830Z] busy:2202637096 (cyc) 00:06:12.189 [2024-12-15T05:46:33.830Z] total_run_count: 4917000 00:06:12.189 [2024-12-15T05:46:33.830Z] tsc_hz: 2200000000 (cyc) 00:06:12.189 [2024-12-15T05:46:33.830Z] ====================================== 00:06:12.189 [2024-12-15T05:46:33.830Z] poller_cost: 447 (cyc), 203 (nsec) 00:06:12.448 ************************************ 00:06:12.448 END TEST thread_poller_perf 00:06:12.448 ************************************ 00:06:12.448 00:06:12.448 real 0m1.223s 00:06:12.448 user 0m1.081s 00:06:12.448 sys 0m0.036s 00:06:12.448 05:46:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.448 05:46:33 -- common/autotest_common.sh@10 -- # set +x 00:06:12.448 05:46:33 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.448 ************************************ 00:06:12.448 END TEST thread 00:06:12.448 ************************************ 00:06:12.448 00:06:12.448 real 0m2.733s 00:06:12.448 user 0m2.292s 00:06:12.448 sys 0m0.218s 00:06:12.448 05:46:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.448 05:46:33 -- common/autotest_common.sh@10 -- # set +x 00:06:12.448 05:46:33 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:12.448 05:46:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.448 05:46:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.448 05:46:33 -- common/autotest_common.sh@10 -- # set +x 00:06:12.448 ************************************ 00:06:12.448 START TEST accel 00:06:12.448 ************************************ 00:06:12.448 05:46:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:12.448 * Looking for test storage... 
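(A brief note, not part of the captured output: the poller_cost values reported by both runs above follow directly from the printed counters. For the 1-microsecond-period run, 2209000284 busy cycles / 356000 runs ≈ 6205 cycles per poller call, and 6205 cycles / 2.2 cycles-per-nsec (tsc_hz 2200000000) ≈ 2820 nsec. For the 0-period run, 2202637096 / 4917000 ≈ 447 cycles, i.e. about 203 nsec per call.)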
00:06:12.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:12.448 05:46:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:12.448 05:46:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:12.448 05:46:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:12.708 05:46:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:12.708 05:46:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:12.708 05:46:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:12.708 05:46:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:12.708 05:46:34 -- scripts/common.sh@335 -- # IFS=.-: 00:06:12.708 05:46:34 -- scripts/common.sh@335 -- # read -ra ver1 00:06:12.708 05:46:34 -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.708 05:46:34 -- scripts/common.sh@336 -- # read -ra ver2 00:06:12.708 05:46:34 -- scripts/common.sh@337 -- # local 'op=<' 00:06:12.708 05:46:34 -- scripts/common.sh@339 -- # ver1_l=2 00:06:12.708 05:46:34 -- scripts/common.sh@340 -- # ver2_l=1 00:06:12.708 05:46:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:12.708 05:46:34 -- scripts/common.sh@343 -- # case "$op" in 00:06:12.708 05:46:34 -- scripts/common.sh@344 -- # : 1 00:06:12.708 05:46:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:12.708 05:46:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.708 05:46:34 -- scripts/common.sh@364 -- # decimal 1 00:06:12.708 05:46:34 -- scripts/common.sh@352 -- # local d=1 00:06:12.708 05:46:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.708 05:46:34 -- scripts/common.sh@354 -- # echo 1 00:06:12.708 05:46:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:12.708 05:46:34 -- scripts/common.sh@365 -- # decimal 2 00:06:12.708 05:46:34 -- scripts/common.sh@352 -- # local d=2 00:06:12.708 05:46:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.708 05:46:34 -- scripts/common.sh@354 -- # echo 2 00:06:12.708 05:46:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:12.708 05:46:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:12.708 05:46:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:12.708 05:46:34 -- scripts/common.sh@367 -- # return 0 00:06:12.708 05:46:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.708 05:46:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.708 --rc genhtml_branch_coverage=1 00:06:12.708 --rc genhtml_function_coverage=1 00:06:12.708 --rc genhtml_legend=1 00:06:12.708 --rc geninfo_all_blocks=1 00:06:12.708 --rc geninfo_unexecuted_blocks=1 00:06:12.708 00:06:12.708 ' 00:06:12.708 05:46:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.708 --rc genhtml_branch_coverage=1 00:06:12.708 --rc genhtml_function_coverage=1 00:06:12.708 --rc genhtml_legend=1 00:06:12.708 --rc geninfo_all_blocks=1 00:06:12.708 --rc geninfo_unexecuted_blocks=1 00:06:12.708 00:06:12.708 ' 00:06:12.708 05:46:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.708 --rc genhtml_branch_coverage=1 00:06:12.708 --rc genhtml_function_coverage=1 00:06:12.708 --rc genhtml_legend=1 00:06:12.708 --rc geninfo_all_blocks=1 00:06:12.708 --rc geninfo_unexecuted_blocks=1 00:06:12.708 00:06:12.708 ' 00:06:12.708 05:46:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.708 --rc genhtml_branch_coverage=1 00:06:12.708 --rc genhtml_function_coverage=1 00:06:12.708 --rc genhtml_legend=1 00:06:12.708 --rc geninfo_all_blocks=1 00:06:12.708 --rc geninfo_unexecuted_blocks=1 00:06:12.708 00:06:12.708 ' 00:06:12.708 05:46:34 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:12.708 05:46:34 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:12.708 05:46:34 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.708 05:46:34 -- accel/accel.sh@59 -- # spdk_tgt_pid=67769 00:06:12.708 05:46:34 -- accel/accel.sh@60 -- # waitforlisten 67769 00:06:12.708 05:46:34 -- common/autotest_common.sh@829 -- # '[' -z 67769 ']' 00:06:12.708 05:46:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.708 05:46:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.708 05:46:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.708 05:46:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.708 05:46:34 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:12.708 05:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.708 05:46:34 -- accel/accel.sh@58 -- # build_accel_config 00:06:12.708 05:46:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.708 05:46:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.708 05:46:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.708 05:46:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.708 05:46:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.708 05:46:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.708 05:46:34 -- accel/accel.sh@42 -- # jq -r . 00:06:12.708 [2024-12-15 05:46:34.177296] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:12.708 [2024-12-15 05:46:34.177579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67769 ] 00:06:12.708 [2024-12-15 05:46:34.317141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.967 [2024-12-15 05:46:34.356137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.967 [2024-12-15 05:46:34.356599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.535 05:46:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.535 05:46:35 -- common/autotest_common.sh@862 -- # return 0 00:06:13.535 05:46:35 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:13.793 05:46:35 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:13.793 05:46:35 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:13.793 05:46:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.793 05:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:13.793 05:46:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.793 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.793 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.793 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.794 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.794 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.794 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.794 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.794 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.794 05:46:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.794 05:46:35 -- accel/accel.sh@64 -- # IFS== 00:06:13.794 05:46:35 -- accel/accel.sh@64 -- # read -r opc module 00:06:13.794 05:46:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:13.794 05:46:35 -- accel/accel.sh@67 -- # killprocess 67769 00:06:13.794 05:46:35 -- common/autotest_common.sh@936 -- # '[' -z 67769 ']' 00:06:13.794 05:46:35 -- common/autotest_common.sh@940 -- # kill -0 67769 00:06:13.794 05:46:35 -- common/autotest_common.sh@941 -- # uname 00:06:13.794 05:46:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.794 05:46:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67769 00:06:13.794 05:46:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.794 05:46:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.794 05:46:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67769' 00:06:13.794 killing process with pid 67769 00:06:13.794 05:46:35 -- common/autotest_common.sh@955 -- # kill 67769 00:06:13.794 05:46:35 -- common/autotest_common.sh@960 -- # wait 67769 00:06:14.052 05:46:35 -- accel/accel.sh@68 -- # trap - ERR 00:06:14.052 05:46:35 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:14.052 05:46:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:14.052 05:46:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.052 05:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.052 05:46:35 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:14.052 05:46:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:14.052 05:46:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.052 05:46:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.052 05:46:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.052 05:46:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.052 05:46:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.052 05:46:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:14.052 05:46:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.053 05:46:35 -- accel/accel.sh@42 -- # jq -r . 00:06:14.053 05:46:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.053 05:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 05:46:35 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:14.053 05:46:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.053 05:46:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.053 05:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 ************************************ 00:06:14.053 START TEST accel_missing_filename 00:06:14.053 ************************************ 00:06:14.053 05:46:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:14.053 05:46:35 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.053 05:46:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:14.053 05:46:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:14.053 05:46:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.053 05:46:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:14.053 05:46:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.053 05:46:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:14.053 05:46:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:14.053 05:46:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.053 05:46:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.053 05:46:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.053 05:46:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.053 05:46:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.053 05:46:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.053 05:46:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.053 05:46:35 -- accel/accel.sh@42 -- # jq -r . 00:06:14.053 [2024-12-15 05:46:35.581458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:14.053 [2024-12-15 05:46:35.581538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67815 ] 00:06:14.312 [2024-12-15 05:46:35.714949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.312 [2024-12-15 05:46:35.744687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.312 [2024-12-15 05:46:35.771925] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.312 [2024-12-15 05:46:35.815468] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:14.312 A filename is required. 
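For reference, the "A filename is required." error above is the expected result of this negative test: accel_perf only starts a compress workload when an uncompressed input file is supplied with -l. A minimal sketch of the two invocations, reusing the binary path and the test/accel/bib input file that appear elsewhere in this run (the harness's -c /dev/fd/62 JSON config is omitted here):

    # fails as above: compress with no input file
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # with -l the workload passes the filename check (the next test then fails only on -y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib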
00:06:14.312 05:46:35 -- common/autotest_common.sh@653 -- # es=234 00:06:14.312 05:46:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.312 05:46:35 -- common/autotest_common.sh@662 -- # es=106 00:06:14.312 05:46:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:14.312 05:46:35 -- common/autotest_common.sh@670 -- # es=1 00:06:14.312 05:46:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.312 00:06:14.312 real 0m0.304s 00:06:14.312 user 0m0.187s 00:06:14.312 sys 0m0.066s 00:06:14.312 05:46:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.312 05:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.312 ************************************ 00:06:14.312 END TEST accel_missing_filename 00:06:14.312 ************************************ 00:06:14.312 05:46:35 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.312 05:46:35 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:14.312 05:46:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.312 05:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.312 ************************************ 00:06:14.312 START TEST accel_compress_verify 00:06:14.312 ************************************ 00:06:14.312 05:46:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.312 05:46:35 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.312 05:46:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.312 05:46:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:14.312 05:46:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.312 05:46:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:14.312 05:46:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.312 05:46:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.312 05:46:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.312 05:46:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.312 05:46:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.312 05:46:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.312 05:46:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.312 05:46:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.312 05:46:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.312 05:46:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.312 05:46:35 -- accel/accel.sh@42 -- # jq -r . 00:06:14.312 [2024-12-15 05:46:35.940132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:14.312 [2024-12-15 05:46:35.940359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67845 ] 00:06:14.572 [2024-12-15 05:46:36.078653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.572 [2024-12-15 05:46:36.118321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.572 [2024-12-15 05:46:36.152419] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.572 [2024-12-15 05:46:36.196990] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:14.831 00:06:14.831 Compression does not support the verify option, aborting. 00:06:14.831 ************************************ 00:06:14.831 END TEST accel_compress_verify 00:06:14.831 ************************************ 00:06:14.831 05:46:36 -- common/autotest_common.sh@653 -- # es=161 00:06:14.831 05:46:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.831 05:46:36 -- common/autotest_common.sh@662 -- # es=33 00:06:14.831 05:46:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:14.831 05:46:36 -- common/autotest_common.sh@670 -- # es=1 00:06:14.831 05:46:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.831 00:06:14.831 real 0m0.336s 00:06:14.831 user 0m0.206s 00:06:14.831 sys 0m0.077s 00:06:14.831 05:46:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.831 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 05:46:36 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:14.831 05:46:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.831 05:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.831 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 ************************************ 00:06:14.831 START TEST accel_wrong_workload 00:06:14.831 ************************************ 00:06:14.831 05:46:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:14.831 05:46:36 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.831 05:46:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:14.831 05:46:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:14.831 05:46:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.831 05:46:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:14.831 05:46:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.831 05:46:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:14.831 05:46:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:14.831 05:46:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.831 05:46:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.831 05:46:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.831 05:46:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.831 05:46:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.831 05:46:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.831 05:46:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.831 05:46:36 -- accel/accel.sh@42 -- # jq -r . 
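The build_accel_config / jq -r . trace above repeats before every accel_perf invocation in this section: the harness collects optional accel module settings into accel_json_cfg (the [[ 0 -gt 0 ]] module checks are all false in this run), renders them as JSON with jq, and hands the result to accel_perf on file descriptor 62 via -c /dev/fd/62, consistent with the earlier opcode table where every operation was assigned to the software module. A rough stand-alone equivalent, with the empty JSON shape stated as an assumption rather than copied from this log:

    # assumption: no accel modules configured, so an effectively empty JSON config
    exec 62< <(echo '{"subsystems": []}')
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y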
00:06:14.831 Unsupported workload type: foobar 00:06:14.831 [2024-12-15 05:46:36.324792] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:14.831 accel_perf options: 00:06:14.831 [-h help message] 00:06:14.831 [-q queue depth per core] 00:06:14.831 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.831 [-T number of threads per core 00:06:14.831 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.831 [-t time in seconds] 00:06:14.831 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.831 [ dif_verify, , dif_generate, dif_generate_copy 00:06:14.831 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.831 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.831 [-S for crc32c workload, use this seed value (default 0) 00:06:14.831 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.831 [-f for fill workload, use this BYTE value (default 255) 00:06:14.831 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.831 [-y verify result if this switch is on] 00:06:14.831 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.831 Can be used to spread operations across a wider range of memory. 00:06:14.831 05:46:36 -- common/autotest_common.sh@653 -- # es=1 00:06:14.831 05:46:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.831 05:46:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.831 05:46:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.831 00:06:14.831 real 0m0.028s 00:06:14.831 user 0m0.017s 00:06:14.831 sys 0m0.011s 00:06:14.831 05:46:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.831 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 ************************************ 00:06:14.831 END TEST accel_wrong_workload 00:06:14.831 ************************************ 00:06:14.831 05:46:36 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.831 05:46:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:14.831 05:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.831 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 ************************************ 00:06:14.831 START TEST accel_negative_buffers 00:06:14.831 ************************************ 00:06:14.831 05:46:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.831 05:46:36 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.831 05:46:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:14.831 05:46:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:14.831 05:46:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.831 05:46:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:14.831 05:46:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.831 05:46:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:14.831 05:46:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:14.831 05:46:36 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:14.831 05:46:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.831 05:46:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.831 05:46:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.831 05:46:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.831 05:46:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.831 05:46:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.831 05:46:36 -- accel/accel.sh@42 -- # jq -r . 00:06:14.831 -x option must be non-negative. 00:06:14.831 [2024-12-15 05:46:36.402399] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:14.831 accel_perf options: 00:06:14.831 [-h help message] 00:06:14.831 [-q queue depth per core] 00:06:14.831 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.831 [-T number of threads per core 00:06:14.831 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.831 [-t time in seconds] 00:06:14.832 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.832 [ dif_verify, , dif_generate, dif_generate_copy 00:06:14.832 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.832 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.832 [-S for crc32c workload, use this seed value (default 0) 00:06:14.832 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.832 [-f for fill workload, use this BYTE value (default 255) 00:06:14.832 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.832 [-y verify result if this switch is on] 00:06:14.832 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.832 Can be used to spread operations across a wider range of memory. 
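The usage text above is printed because -x -1 fails accel_perf's argument parsing; by the same text, the xor workload needs at least two source buffers. A sketch of a valid invocation under those constraints (config descriptor omitted, as in the sketches above):

    # three source buffers XORed into one destination, with -y verifying the result
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3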
00:06:14.832 05:46:36 -- common/autotest_common.sh@653 -- # es=1 00:06:14.832 05:46:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.832 05:46:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.832 05:46:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.832 00:06:14.832 real 0m0.027s 00:06:14.832 user 0m0.017s 00:06:14.832 sys 0m0.010s 00:06:14.832 05:46:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.832 ************************************ 00:06:14.832 END TEST accel_negative_buffers 00:06:14.832 ************************************ 00:06:14.832 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.832 05:46:36 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:14.832 05:46:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:14.832 05:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.832 05:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.832 ************************************ 00:06:14.832 START TEST accel_crc32c 00:06:14.832 ************************************ 00:06:14.832 05:46:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:14.832 05:46:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.832 05:46:36 -- accel/accel.sh@17 -- # local accel_module 00:06:14.832 05:46:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:14.832 05:46:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:14.832 05:46:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.832 05:46:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.832 05:46:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.832 05:46:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.832 05:46:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.832 05:46:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.832 05:46:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.832 05:46:36 -- accel/accel.sh@42 -- # jq -r . 00:06:15.091 [2024-12-15 05:46:36.478524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:15.091 [2024-12-15 05:46:36.478613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67898 ] 00:06:15.091 [2024-12-15 05:46:36.617339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.091 [2024-12-15 05:46:36.656255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.468 05:46:37 -- accel/accel.sh@18 -- # out=' 00:06:16.468 SPDK Configuration: 00:06:16.468 Core mask: 0x1 00:06:16.468 00:06:16.468 Accel Perf Configuration: 00:06:16.468 Workload Type: crc32c 00:06:16.468 CRC-32C seed: 32 00:06:16.468 Transfer size: 4096 bytes 00:06:16.468 Vector count 1 00:06:16.468 Module: software 00:06:16.468 Queue depth: 32 00:06:16.468 Allocate depth: 32 00:06:16.468 # threads/core: 1 00:06:16.468 Run time: 1 seconds 00:06:16.468 Verify: Yes 00:06:16.468 00:06:16.468 Running for 1 seconds... 
00:06:16.468 00:06:16.468 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.468 ------------------------------------------------------------------------------------ 00:06:16.468 0,0 502880/s 1964 MiB/s 0 0 00:06:16.468 ==================================================================================== 00:06:16.468 Total 502880/s 1964 MiB/s 0 0' 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.468 05:46:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.468 05:46:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:16.468 05:46:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.468 05:46:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.468 05:46:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.468 05:46:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.468 05:46:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.468 05:46:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.468 05:46:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.468 05:46:37 -- accel/accel.sh@42 -- # jq -r . 00:06:16.468 [2024-12-15 05:46:37.802298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:16.468 [2024-12-15 05:46:37.802405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67916 ] 00:06:16.468 [2024-12-15 05:46:37.935392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.468 [2024-12-15 05:46:37.965617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.468 05:46:37 -- accel/accel.sh@21 -- # val= 00:06:16.468 05:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.468 05:46:37 -- accel/accel.sh@21 -- # val= 00:06:16.468 05:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.468 05:46:37 -- accel/accel.sh@21 -- # val=0x1 00:06:16.468 05:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.468 05:46:37 -- accel/accel.sh@21 -- # val= 00:06:16.468 05:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.468 05:46:37 -- accel/accel.sh@21 -- # val= 00:06:16.468 05:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.468 05:46:37 -- accel/accel.sh@21 -- # val=crc32c 00:06:16.468 05:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.468 05:46:37 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.468 05:46:37 -- accel/accel.sh@21 -- # val=32 00:06:16.468 05:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.468 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:37 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.469 05:46:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:37 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:37 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:37 -- accel/accel.sh@21 -- # val= 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:38 -- accel/accel.sh@21 -- # val=software 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:38 -- accel/accel.sh@21 -- # val=32 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:38 -- accel/accel.sh@21 -- # val=32 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:38 -- accel/accel.sh@21 -- # val=1 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:38 -- accel/accel.sh@21 -- # val=Yes 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:38 -- accel/accel.sh@21 -- # val= 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:16.469 05:46:38 -- accel/accel.sh@21 -- # val= 00:06:16.469 05:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # IFS=: 00:06:16.469 05:46:38 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 05:46:39 -- accel/accel.sh@21 -- # val= 00:06:17.866 05:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 05:46:39 -- accel/accel.sh@21 -- # val= 00:06:17.866 05:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 05:46:39 -- accel/accel.sh@21 -- # val= 00:06:17.866 05:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 05:46:39 -- accel/accel.sh@21 -- # val= 00:06:17.866 05:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 05:46:39 -- accel/accel.sh@21 -- # val= 00:06:17.866 05:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 05:46:39 -- 
accel/accel.sh@20 -- # read -r var val 00:06:17.866 05:46:39 -- accel/accel.sh@21 -- # val= 00:06:17.866 05:46:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # IFS=: 00:06:17.866 05:46:39 -- accel/accel.sh@20 -- # read -r var val 00:06:17.866 05:46:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.866 05:46:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:17.866 05:46:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.866 00:06:17.866 real 0m2.628s 00:06:17.866 user 0m2.281s 00:06:17.866 sys 0m0.149s 00:06:17.866 05:46:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.866 05:46:39 -- common/autotest_common.sh@10 -- # set +x 00:06:17.866 ************************************ 00:06:17.866 END TEST accel_crc32c 00:06:17.866 ************************************ 00:06:17.866 05:46:39 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:17.866 05:46:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:17.866 05:46:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.866 05:46:39 -- common/autotest_common.sh@10 -- # set +x 00:06:17.866 ************************************ 00:06:17.866 START TEST accel_crc32c_C2 00:06:17.866 ************************************ 00:06:17.866 05:46:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:17.866 05:46:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.866 05:46:39 -- accel/accel.sh@17 -- # local accel_module 00:06:17.866 05:46:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.866 05:46:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.866 05:46:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.866 05:46:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.866 05:46:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.866 05:46:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.866 05:46:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.866 05:46:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.866 05:46:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.866 05:46:39 -- accel/accel.sh@42 -- # jq -r . 00:06:17.866 [2024-12-15 05:46:39.161545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:17.866 [2024-12-15 05:46:39.161645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67952 ] 00:06:17.866 [2024-12-15 05:46:39.296570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.866 [2024-12-15 05:46:39.327599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.851 05:46:40 -- accel/accel.sh@18 -- # out=' 00:06:18.851 SPDK Configuration: 00:06:18.851 Core mask: 0x1 00:06:18.851 00:06:18.851 Accel Perf Configuration: 00:06:18.851 Workload Type: crc32c 00:06:18.851 CRC-32C seed: 0 00:06:18.851 Transfer size: 4096 bytes 00:06:18.851 Vector count 2 00:06:18.851 Module: software 00:06:18.851 Queue depth: 32 00:06:18.851 Allocate depth: 32 00:06:18.851 # threads/core: 1 00:06:18.851 Run time: 1 seconds 00:06:18.851 Verify: Yes 00:06:18.851 00:06:18.851 Running for 1 seconds... 
00:06:18.851 00:06:18.851 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.851 ------------------------------------------------------------------------------------ 00:06:18.851 0,0 393312/s 1536 MiB/s 0 0 00:06:18.851 ==================================================================================== 00:06:18.851 Total 393312/s 1536 MiB/s 0 0' 00:06:18.851 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:18.851 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:18.851 05:46:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:18.851 05:46:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:18.851 05:46:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.851 05:46:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.851 05:46:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.851 05:46:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.851 05:46:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.851 05:46:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.851 05:46:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.851 05:46:40 -- accel/accel.sh@42 -- # jq -r . 00:06:18.851 [2024-12-15 05:46:40.463862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:18.851 [2024-12-15 05:46:40.463975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67966 ] 00:06:19.110 [2024-12-15 05:46:40.594990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.110 [2024-12-15 05:46:40.625441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.110 05:46:40 -- accel/accel.sh@21 -- # val= 00:06:19.110 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.110 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.110 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.110 05:46:40 -- accel/accel.sh@21 -- # val= 00:06:19.110 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.110 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.110 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.110 05:46:40 -- accel/accel.sh@21 -- # val=0x1 00:06:19.110 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.110 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.110 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.110 05:46:40 -- accel/accel.sh@21 -- # val= 00:06:19.110 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.110 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.110 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val= 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val=crc32c 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val=0 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val= 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val=software 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val=32 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val=32 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val=1 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val=Yes 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val= 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:19.111 05:46:40 -- accel/accel.sh@21 -- # val= 00:06:19.111 05:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # IFS=: 00:06:19.111 05:46:40 -- accel/accel.sh@20 -- # read -r var val 00:06:20.488 05:46:41 -- accel/accel.sh@21 -- # val= 00:06:20.488 05:46:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.488 05:46:41 -- accel/accel.sh@20 -- # IFS=: 00:06:20.488 05:46:41 -- accel/accel.sh@20 -- # read -r var val 00:06:20.488 05:46:41 -- accel/accel.sh@21 -- # val= 00:06:20.488 05:46:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.488 05:46:41 -- accel/accel.sh@20 -- # IFS=: 00:06:20.489 05:46:41 -- accel/accel.sh@20 -- # read -r var val 00:06:20.489 05:46:41 -- accel/accel.sh@21 -- # val= 00:06:20.489 05:46:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.489 05:46:41 -- accel/accel.sh@20 -- # IFS=: 00:06:20.489 05:46:41 -- accel/accel.sh@20 -- # read -r var val 00:06:20.489 05:46:41 -- accel/accel.sh@21 -- # val= 00:06:20.489 05:46:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.489 05:46:41 -- accel/accel.sh@20 -- # IFS=: 00:06:20.489 05:46:41 -- accel/accel.sh@20 -- # read -r var val 00:06:20.489 05:46:41 -- accel/accel.sh@21 -- # val= 00:06:20.489 05:46:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.489 05:46:41 -- accel/accel.sh@20 -- # IFS=: 00:06:20.489 05:46:41 -- 
accel/accel.sh@20 -- # read -r var val 00:06:20.489 05:46:41 -- accel/accel.sh@21 -- # val= 00:06:20.489 05:46:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.489 05:46:41 -- accel/accel.sh@20 -- # IFS=: 00:06:20.489 05:46:41 -- accel/accel.sh@20 -- # read -r var val 00:06:20.489 05:46:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.489 05:46:41 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:20.489 05:46:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.489 00:06:20.489 real 0m2.603s 00:06:20.489 user 0m2.268s 00:06:20.489 sys 0m0.136s 00:06:20.489 05:46:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.489 05:46:41 -- common/autotest_common.sh@10 -- # set +x 00:06:20.489 ************************************ 00:06:20.489 END TEST accel_crc32c_C2 00:06:20.489 ************************************ 00:06:20.489 05:46:41 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:20.489 05:46:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.489 05:46:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.489 05:46:41 -- common/autotest_common.sh@10 -- # set +x 00:06:20.489 ************************************ 00:06:20.489 START TEST accel_copy 00:06:20.489 ************************************ 00:06:20.489 05:46:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:20.489 05:46:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.489 05:46:41 -- accel/accel.sh@17 -- # local accel_module 00:06:20.489 05:46:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:20.489 05:46:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:20.489 05:46:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.489 05:46:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.489 05:46:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.489 05:46:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.489 05:46:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.489 05:46:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.489 05:46:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.489 05:46:41 -- accel/accel.sh@42 -- # jq -r . 00:06:20.489 [2024-12-15 05:46:41.821101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:20.489 [2024-12-15 05:46:41.821201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67995 ] 00:06:20.489 [2024-12-15 05:46:41.955719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.489 [2024-12-15 05:46:41.989988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.867 05:46:43 -- accel/accel.sh@18 -- # out=' 00:06:21.867 SPDK Configuration: 00:06:21.867 Core mask: 0x1 00:06:21.867 00:06:21.867 Accel Perf Configuration: 00:06:21.867 Workload Type: copy 00:06:21.868 Transfer size: 4096 bytes 00:06:21.868 Vector count 1 00:06:21.868 Module: software 00:06:21.868 Queue depth: 32 00:06:21.868 Allocate depth: 32 00:06:21.868 # threads/core: 1 00:06:21.868 Run time: 1 seconds 00:06:21.868 Verify: Yes 00:06:21.868 00:06:21.868 Running for 1 seconds... 
00:06:21.868 00:06:21.868 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.868 ------------------------------------------------------------------------------------ 00:06:21.868 0,0 362656/s 1416 MiB/s 0 0 00:06:21.868 ==================================================================================== 00:06:21.868 Total 362656/s 1416 MiB/s 0 0' 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.868 05:46:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:21.868 05:46:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.868 05:46:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.868 05:46:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.868 05:46:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.868 05:46:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.868 05:46:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.868 05:46:43 -- accel/accel.sh@42 -- # jq -r . 00:06:21.868 [2024-12-15 05:46:43.135534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.868 [2024-12-15 05:46:43.135621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68022 ] 00:06:21.868 [2024-12-15 05:46:43.262219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.868 [2024-12-15 05:46:43.292044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val= 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val= 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val=0x1 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val= 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val= 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val=copy 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- 
accel/accel.sh@21 -- # val= 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val=software 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val=32 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val=32 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val=1 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val=Yes 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val= 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:21.868 05:46:43 -- accel/accel.sh@21 -- # val= 00:06:21.868 05:46:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # IFS=: 00:06:21.868 05:46:43 -- accel/accel.sh@20 -- # read -r var val 00:06:22.804 05:46:44 -- accel/accel.sh@21 -- # val= 00:06:22.804 05:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # IFS=: 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # read -r var val 00:06:22.804 05:46:44 -- accel/accel.sh@21 -- # val= 00:06:22.804 05:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # IFS=: 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # read -r var val 00:06:22.804 05:46:44 -- accel/accel.sh@21 -- # val= 00:06:22.804 05:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # IFS=: 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # read -r var val 00:06:22.804 05:46:44 -- accel/accel.sh@21 -- # val= 00:06:22.804 05:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # IFS=: 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # read -r var val 00:06:22.804 05:46:44 -- accel/accel.sh@21 -- # val= 00:06:22.804 05:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # IFS=: 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # read -r var val 00:06:22.804 05:46:44 -- accel/accel.sh@21 -- # val= 00:06:22.804 05:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.804 05:46:44 -- accel/accel.sh@20 -- # IFS=: 00:06:22.804 05:46:44 -- 
accel/accel.sh@20 -- # read -r var val 00:06:22.804 05:46:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.804 05:46:44 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:22.804 05:46:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.804 00:06:22.804 real 0m2.614s 00:06:22.804 user 0m2.282s 00:06:22.804 sys 0m0.133s 00:06:22.804 05:46:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.804 05:46:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.804 ************************************ 00:06:22.804 END TEST accel_copy 00:06:22.804 ************************************ 00:06:23.063 05:46:44 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.063 05:46:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:23.063 05:46:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.063 05:46:44 -- common/autotest_common.sh@10 -- # set +x 00:06:23.063 ************************************ 00:06:23.063 START TEST accel_fill 00:06:23.063 ************************************ 00:06:23.063 05:46:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.063 05:46:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.063 05:46:44 -- accel/accel.sh@17 -- # local accel_module 00:06:23.063 05:46:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.063 05:46:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.063 05:46:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.063 05:46:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.063 05:46:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.063 05:46:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.063 05:46:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.063 05:46:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.063 05:46:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.063 05:46:44 -- accel/accel.sh@42 -- # jq -r . 00:06:23.063 [2024-12-15 05:46:44.483386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:23.063 [2024-12-15 05:46:44.483488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68051 ] 00:06:23.063 [2024-12-15 05:46:44.610011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.063 [2024-12-15 05:46:44.640476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.440 05:46:45 -- accel/accel.sh@18 -- # out=' 00:06:24.440 SPDK Configuration: 00:06:24.440 Core mask: 0x1 00:06:24.440 00:06:24.440 Accel Perf Configuration: 00:06:24.440 Workload Type: fill 00:06:24.440 Fill pattern: 0x80 00:06:24.440 Transfer size: 4096 bytes 00:06:24.440 Vector count 1 00:06:24.440 Module: software 00:06:24.440 Queue depth: 64 00:06:24.440 Allocate depth: 64 00:06:24.440 # threads/core: 1 00:06:24.440 Run time: 1 seconds 00:06:24.440 Verify: Yes 00:06:24.440 00:06:24.440 Running for 1 seconds... 
00:06:24.440 00:06:24.440 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.440 ------------------------------------------------------------------------------------ 00:06:24.440 0,0 530560/s 2072 MiB/s 0 0 00:06:24.440 ==================================================================================== 00:06:24.440 Total 530560/s 2072 MiB/s 0 0' 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.440 05:46:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.440 05:46:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.440 05:46:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.440 05:46:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.440 05:46:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.440 05:46:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.440 05:46:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.440 05:46:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.440 05:46:45 -- accel/accel.sh@42 -- # jq -r . 00:06:24.440 [2024-12-15 05:46:45.780444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:24.440 [2024-12-15 05:46:45.780988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68065 ] 00:06:24.440 [2024-12-15 05:46:45.915673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.440 [2024-12-15 05:46:45.946063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val= 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val= 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val=0x1 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val= 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val= 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val=fill 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val=0x80 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 
00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val= 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val=software 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val=64 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val=64 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val=1 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val=Yes 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val= 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:24.440 05:46:45 -- accel/accel.sh@21 -- # val= 00:06:24.440 05:46:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # IFS=: 00:06:24.440 05:46:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.818 05:46:47 -- accel/accel.sh@21 -- # val= 00:06:25.818 05:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # IFS=: 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # read -r var val 00:06:25.818 05:46:47 -- accel/accel.sh@21 -- # val= 00:06:25.818 05:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # IFS=: 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # read -r var val 00:06:25.818 05:46:47 -- accel/accel.sh@21 -- # val= 00:06:25.818 05:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # IFS=: 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # read -r var val 00:06:25.818 05:46:47 -- accel/accel.sh@21 -- # val= 00:06:25.818 05:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # IFS=: 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # read -r var val 00:06:25.818 05:46:47 -- accel/accel.sh@21 -- # val= 00:06:25.818 05:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # IFS=: 
00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # read -r var val 00:06:25.818 05:46:47 -- accel/accel.sh@21 -- # val= 00:06:25.818 05:46:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # IFS=: 00:06:25.818 05:46:47 -- accel/accel.sh@20 -- # read -r var val 00:06:25.818 05:46:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.818 05:46:47 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:25.818 05:46:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.818 00:06:25.818 real 0m2.604s 00:06:25.818 user 0m2.262s 00:06:25.818 sys 0m0.143s 00:06:25.818 05:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.818 ************************************ 00:06:25.818 05:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:25.818 END TEST accel_fill 00:06:25.818 ************************************ 00:06:25.818 05:46:47 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:25.818 05:46:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:25.818 05:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.818 05:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:25.818 ************************************ 00:06:25.818 START TEST accel_copy_crc32c 00:06:25.818 ************************************ 00:06:25.818 05:46:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:25.818 05:46:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.818 05:46:47 -- accel/accel.sh@17 -- # local accel_module 00:06:25.818 05:46:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:25.818 05:46:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:25.818 05:46:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.818 05:46:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.818 05:46:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.818 05:46:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.818 05:46:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.818 05:46:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.818 05:46:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.818 05:46:47 -- accel/accel.sh@42 -- # jq -r . 00:06:25.818 [2024-12-15 05:46:47.142031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:25.818 [2024-12-15 05:46:47.142119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68100 ] 00:06:25.818 [2024-12-15 05:46:47.278490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.818 [2024-12-15 05:46:47.313388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.196 05:46:48 -- accel/accel.sh@18 -- # out=' 00:06:27.196 SPDK Configuration: 00:06:27.196 Core mask: 0x1 00:06:27.196 00:06:27.196 Accel Perf Configuration: 00:06:27.196 Workload Type: copy_crc32c 00:06:27.196 CRC-32C seed: 0 00:06:27.196 Vector size: 4096 bytes 00:06:27.196 Transfer size: 4096 bytes 00:06:27.196 Vector count 1 00:06:27.196 Module: software 00:06:27.196 Queue depth: 32 00:06:27.196 Allocate depth: 32 00:06:27.196 # threads/core: 1 00:06:27.196 Run time: 1 seconds 00:06:27.196 Verify: Yes 00:06:27.196 00:06:27.196 Running for 1 seconds... 
00:06:27.196 00:06:27.196 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.196 ------------------------------------------------------------------------------------ 00:06:27.196 0,0 285632/s 1115 MiB/s 0 0 00:06:27.196 ==================================================================================== 00:06:27.196 Total 285632/s 1115 MiB/s 0 0' 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:27.196 05:46:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:27.196 05:46:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.196 05:46:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.196 05:46:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.196 05:46:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.196 05:46:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.196 05:46:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.196 05:46:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.196 05:46:48 -- accel/accel.sh@42 -- # jq -r . 00:06:27.196 [2024-12-15 05:46:48.451401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:27.196 [2024-12-15 05:46:48.451662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68121 ] 00:06:27.196 [2024-12-15 05:46:48.587431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.196 [2024-12-15 05:46:48.618432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val= 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val= 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val=0x1 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val= 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val= 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val=0 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 
05:46:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val= 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val=software 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.196 05:46:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.196 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.196 05:46:48 -- accel/accel.sh@21 -- # val=32 00:06:27.196 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.197 05:46:48 -- accel/accel.sh@21 -- # val=32 00:06:27.197 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.197 05:46:48 -- accel/accel.sh@21 -- # val=1 00:06:27.197 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.197 05:46:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.197 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.197 05:46:48 -- accel/accel.sh@21 -- # val=Yes 00:06:27.197 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.197 05:46:48 -- accel/accel.sh@21 -- # val= 00:06:27.197 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:27.197 05:46:48 -- accel/accel.sh@21 -- # val= 00:06:27.197 05:46:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # IFS=: 00:06:27.197 05:46:48 -- accel/accel.sh@20 -- # read -r var val 00:06:28.133 05:46:49 -- accel/accel.sh@21 -- # val= 00:06:28.133 05:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # IFS=: 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # read -r var val 00:06:28.133 05:46:49 -- accel/accel.sh@21 -- # val= 00:06:28.133 05:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # IFS=: 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # read -r var val 00:06:28.133 05:46:49 -- accel/accel.sh@21 -- # val= 00:06:28.133 05:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # IFS=: 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # read -r var val 00:06:28.133 05:46:49 -- accel/accel.sh@21 -- # val= 00:06:28.133 05:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # IFS=: 
00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # read -r var val 00:06:28.133 05:46:49 -- accel/accel.sh@21 -- # val= 00:06:28.133 05:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # IFS=: 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # read -r var val 00:06:28.133 05:46:49 -- accel/accel.sh@21 -- # val= 00:06:28.133 05:46:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # IFS=: 00:06:28.133 05:46:49 -- accel/accel.sh@20 -- # read -r var val 00:06:28.133 05:46:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.133 05:46:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:28.133 05:46:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.133 00:06:28.133 real 0m2.622s 00:06:28.133 user 0m2.276s 00:06:28.133 sys 0m0.142s 00:06:28.133 ************************************ 00:06:28.133 END TEST accel_copy_crc32c 00:06:28.133 ************************************ 00:06:28.133 05:46:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.133 05:46:49 -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 05:46:49 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:28.393 05:46:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:28.393 05:46:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.393 05:46:49 -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 ************************************ 00:06:28.393 START TEST accel_copy_crc32c_C2 00:06:28.393 ************************************ 00:06:28.393 05:46:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:28.393 05:46:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.393 05:46:49 -- accel/accel.sh@17 -- # local accel_module 00:06:28.393 05:46:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:28.393 05:46:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:28.393 05:46:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.393 05:46:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.393 05:46:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.393 05:46:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.393 05:46:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.393 05:46:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.393 05:46:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.393 05:46:49 -- accel/accel.sh@42 -- # jq -r . 00:06:28.393 [2024-12-15 05:46:49.817470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:28.393 [2024-12-15 05:46:49.818005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68150 ] 00:06:28.393 [2024-12-15 05:46:49.952752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.393 [2024-12-15 05:46:49.983711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.771 05:46:51 -- accel/accel.sh@18 -- # out=' 00:06:29.771 SPDK Configuration: 00:06:29.771 Core mask: 0x1 00:06:29.771 00:06:29.771 Accel Perf Configuration: 00:06:29.771 Workload Type: copy_crc32c 00:06:29.771 CRC-32C seed: 0 00:06:29.771 Vector size: 4096 bytes 00:06:29.771 Transfer size: 8192 bytes 00:06:29.771 Vector count 2 00:06:29.771 Module: software 00:06:29.771 Queue depth: 32 00:06:29.771 Allocate depth: 32 00:06:29.771 # threads/core: 1 00:06:29.771 Run time: 1 seconds 00:06:29.771 Verify: Yes 00:06:29.771 00:06:29.771 Running for 1 seconds... 00:06:29.771 00:06:29.771 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.771 ------------------------------------------------------------------------------------ 00:06:29.771 0,0 203904/s 1593 MiB/s 0 0 00:06:29.771 ==================================================================================== 00:06:29.771 Total 203904/s 796 MiB/s 0 0' 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:29.771 05:46:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.771 05:46:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.771 05:46:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.771 05:46:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.771 05:46:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.771 05:46:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.771 05:46:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.771 05:46:51 -- accel/accel.sh@42 -- # jq -r . 00:06:29.771 [2024-12-15 05:46:51.128108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:29.771 [2024-12-15 05:46:51.128211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68165 ] 00:06:29.771 [2024-12-15 05:46:51.264260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.771 [2024-12-15 05:46:51.295329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val= 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val= 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val=0x1 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val= 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val= 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val=0 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val= 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val=software 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val=32 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val=32 
00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val=1 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val=Yes 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val= 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:29.771 05:46:51 -- accel/accel.sh@21 -- # val= 00:06:29.771 05:46:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # IFS=: 00:06:29.771 05:46:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.149 05:46:52 -- accel/accel.sh@21 -- # val= 00:06:31.149 05:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # IFS=: 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # read -r var val 00:06:31.149 05:46:52 -- accel/accel.sh@21 -- # val= 00:06:31.149 05:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # IFS=: 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # read -r var val 00:06:31.149 05:46:52 -- accel/accel.sh@21 -- # val= 00:06:31.149 05:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # IFS=: 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # read -r var val 00:06:31.149 05:46:52 -- accel/accel.sh@21 -- # val= 00:06:31.149 05:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # IFS=: 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # read -r var val 00:06:31.149 05:46:52 -- accel/accel.sh@21 -- # val= 00:06:31.149 05:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # IFS=: 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # read -r var val 00:06:31.149 05:46:52 -- accel/accel.sh@21 -- # val= 00:06:31.149 05:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # IFS=: 00:06:31.149 05:46:52 -- accel/accel.sh@20 -- # read -r var val 00:06:31.149 05:46:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.149 05:46:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:31.149 05:46:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.149 00:06:31.149 real 0m2.626s 00:06:31.149 user 0m2.283s 00:06:31.149 sys 0m0.143s 00:06:31.149 05:46:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.149 05:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:31.149 ************************************ 00:06:31.149 END TEST accel_copy_crc32c_C2 00:06:31.149 ************************************ 00:06:31.149 05:46:52 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:31.149 05:46:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:31.149 05:46:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.149 05:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:31.149 ************************************ 00:06:31.149 START TEST accel_dualcast 00:06:31.149 ************************************ 00:06:31.149 05:46:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:31.149 05:46:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.149 05:46:52 -- accel/accel.sh@17 -- # local accel_module 00:06:31.149 05:46:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:31.149 05:46:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:31.149 05:46:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.149 05:46:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.149 05:46:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.149 05:46:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.149 05:46:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.149 05:46:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.149 05:46:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.149 05:46:52 -- accel/accel.sh@42 -- # jq -r . 00:06:31.149 [2024-12-15 05:46:52.494495] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:31.149 [2024-12-15 05:46:52.494591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68204 ] 00:06:31.149 [2024-12-15 05:46:52.625358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.149 [2024-12-15 05:46:52.656307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.528 05:46:53 -- accel/accel.sh@18 -- # out=' 00:06:32.528 SPDK Configuration: 00:06:32.528 Core mask: 0x1 00:06:32.528 00:06:32.528 Accel Perf Configuration: 00:06:32.528 Workload Type: dualcast 00:06:32.528 Transfer size: 4096 bytes 00:06:32.528 Vector count 1 00:06:32.528 Module: software 00:06:32.528 Queue depth: 32 00:06:32.528 Allocate depth: 32 00:06:32.528 # threads/core: 1 00:06:32.528 Run time: 1 seconds 00:06:32.528 Verify: Yes 00:06:32.528 00:06:32.528 Running for 1 seconds... 00:06:32.528 00:06:32.528 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.528 ------------------------------------------------------------------------------------ 00:06:32.528 0,0 389088/s 1519 MiB/s 0 0 00:06:32.528 ==================================================================================== 00:06:32.528 Total 389088/s 1519 MiB/s 0 0' 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:32.528 05:46:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:32.528 05:46:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.528 05:46:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.528 05:46:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.528 05:46:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.528 05:46:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.528 05:46:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.528 05:46:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.528 05:46:53 -- accel/accel.sh@42 -- # jq -r . 
00:06:32.528 [2024-12-15 05:46:53.798811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:32.528 [2024-12-15 05:46:53.799638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68218 ] 00:06:32.528 [2024-12-15 05:46:53.935073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.528 [2024-12-15 05:46:53.968182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.528 05:46:53 -- accel/accel.sh@21 -- # val= 00:06:32.528 05:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:53 -- accel/accel.sh@21 -- # val= 00:06:32.528 05:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:53 -- accel/accel.sh@21 -- # val=0x1 00:06:32.528 05:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:53 -- accel/accel.sh@21 -- # val= 00:06:32.528 05:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:53 -- accel/accel.sh@21 -- # val= 00:06:32.528 05:46:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:53 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val=dualcast 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val= 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val=software 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val=32 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val=32 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val=1 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 
05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val=Yes 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val= 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:32.528 05:46:54 -- accel/accel.sh@21 -- # val= 00:06:32.528 05:46:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # IFS=: 00:06:32.528 05:46:54 -- accel/accel.sh@20 -- # read -r var val 00:06:33.464 05:46:55 -- accel/accel.sh@21 -- # val= 00:06:33.464 05:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.464 05:46:55 -- accel/accel.sh@20 -- # IFS=: 00:06:33.464 05:46:55 -- accel/accel.sh@20 -- # read -r var val 00:06:33.464 05:46:55 -- accel/accel.sh@21 -- # val= 00:06:33.464 05:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.464 05:46:55 -- accel/accel.sh@20 -- # IFS=: 00:06:33.464 05:46:55 -- accel/accel.sh@20 -- # read -r var val 00:06:33.464 05:46:55 -- accel/accel.sh@21 -- # val= 00:06:33.464 05:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.464 05:46:55 -- accel/accel.sh@20 -- # IFS=: 00:06:33.464 05:46:55 -- accel/accel.sh@20 -- # read -r var val 00:06:33.464 05:46:55 -- accel/accel.sh@21 -- # val= 00:06:33.465 05:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.465 05:46:55 -- accel/accel.sh@20 -- # IFS=: 00:06:33.465 05:46:55 -- accel/accel.sh@20 -- # read -r var val 00:06:33.465 05:46:55 -- accel/accel.sh@21 -- # val= 00:06:33.465 05:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.465 05:46:55 -- accel/accel.sh@20 -- # IFS=: 00:06:33.465 05:46:55 -- accel/accel.sh@20 -- # read -r var val 00:06:33.465 05:46:55 -- accel/accel.sh@21 -- # val= 00:06:33.465 05:46:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.465 05:46:55 -- accel/accel.sh@20 -- # IFS=: 00:06:33.465 05:46:55 -- accel/accel.sh@20 -- # read -r var val 00:06:33.465 05:46:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.465 05:46:55 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:33.465 05:46:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.465 00:06:33.465 real 0m2.620s 00:06:33.465 user 0m2.278s 00:06:33.465 sys 0m0.140s 00:06:33.465 05:46:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.465 05:46:55 -- common/autotest_common.sh@10 -- # set +x 00:06:33.465 ************************************ 00:06:33.465 END TEST accel_dualcast 00:06:33.465 ************************************ 00:06:33.723 05:46:55 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:33.723 05:46:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:33.723 05:46:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.723 05:46:55 -- common/autotest_common.sh@10 -- # set +x 00:06:33.723 ************************************ 00:06:33.723 START TEST accel_compare 00:06:33.723 ************************************ 00:06:33.723 05:46:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:33.723 
05:46:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.723 05:46:55 -- accel/accel.sh@17 -- # local accel_module 00:06:33.723 05:46:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:33.723 05:46:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:33.723 05:46:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.723 05:46:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.723 05:46:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.723 05:46:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.723 05:46:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.723 05:46:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.723 05:46:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.723 05:46:55 -- accel/accel.sh@42 -- # jq -r . 00:06:33.723 [2024-12-15 05:46:55.174554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:33.723 [2024-12-15 05:46:55.174644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68247 ] 00:06:33.723 [2024-12-15 05:46:55.311846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.723 [2024-12-15 05:46:55.344210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.100 05:46:56 -- accel/accel.sh@18 -- # out=' 00:06:35.100 SPDK Configuration: 00:06:35.100 Core mask: 0x1 00:06:35.100 00:06:35.100 Accel Perf Configuration: 00:06:35.100 Workload Type: compare 00:06:35.100 Transfer size: 4096 bytes 00:06:35.100 Vector count 1 00:06:35.100 Module: software 00:06:35.100 Queue depth: 32 00:06:35.100 Allocate depth: 32 00:06:35.100 # threads/core: 1 00:06:35.100 Run time: 1 seconds 00:06:35.100 Verify: Yes 00:06:35.100 00:06:35.100 Running for 1 seconds... 00:06:35.100 00:06:35.100 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.100 ------------------------------------------------------------------------------------ 00:06:35.100 0,0 505312/s 1973 MiB/s 0 0 00:06:35.100 ==================================================================================== 00:06:35.100 Total 505312/s 1973 MiB/s 0 0' 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:35.100 05:46:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.100 05:46:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.100 05:46:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.100 05:46:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.100 05:46:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.100 05:46:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.100 05:46:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.100 05:46:56 -- accel/accel.sh@42 -- # jq -r . 00:06:35.100 [2024-12-15 05:46:56.489167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:35.100 [2024-12-15 05:46:56.489257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68269 ] 00:06:35.100 [2024-12-15 05:46:56.622773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.100 [2024-12-15 05:46:56.656930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val= 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val= 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val=0x1 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val= 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val= 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val=compare 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val= 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val=software 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val=32 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val=32 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val=1 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val=Yes 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val= 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:35.100 05:46:56 -- accel/accel.sh@21 -- # val= 00:06:35.100 05:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # IFS=: 00:06:35.100 05:46:56 -- accel/accel.sh@20 -- # read -r var val 00:06:36.478 05:46:57 -- accel/accel.sh@21 -- # val= 00:06:36.478 05:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # IFS=: 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # read -r var val 00:06:36.478 05:46:57 -- accel/accel.sh@21 -- # val= 00:06:36.478 05:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # IFS=: 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # read -r var val 00:06:36.478 05:46:57 -- accel/accel.sh@21 -- # val= 00:06:36.478 05:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # IFS=: 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # read -r var val 00:06:36.478 05:46:57 -- accel/accel.sh@21 -- # val= 00:06:36.478 05:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # IFS=: 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # read -r var val 00:06:36.478 05:46:57 -- accel/accel.sh@21 -- # val= 00:06:36.478 05:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # IFS=: 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # read -r var val 00:06:36.478 05:46:57 -- accel/accel.sh@21 -- # val= 00:06:36.478 05:46:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # IFS=: 00:06:36.478 05:46:57 -- accel/accel.sh@20 -- # read -r var val 00:06:36.478 05:46:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.478 05:46:57 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:36.478 05:46:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.478 00:06:36.478 real 0m2.642s 00:06:36.478 user 0m2.291s 00:06:36.478 sys 0m0.146s 00:06:36.478 ************************************ 00:06:36.478 END TEST accel_compare 00:06:36.478 ************************************ 00:06:36.479 05:46:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.479 05:46:57 -- common/autotest_common.sh@10 -- # set +x 00:06:36.479 05:46:57 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:36.479 05:46:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:36.479 05:46:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.479 05:46:57 -- common/autotest_common.sh@10 -- # set +x 00:06:36.479 ************************************ 00:06:36.479 START TEST accel_xor 00:06:36.479 ************************************ 00:06:36.479 05:46:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:36.479 05:46:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.479 05:46:57 -- accel/accel.sh@17 -- # local accel_module 00:06:36.479 
05:46:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:36.479 05:46:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:36.479 05:46:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.479 05:46:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.479 05:46:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.479 05:46:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.479 05:46:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.479 05:46:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.479 05:46:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.479 05:46:57 -- accel/accel.sh@42 -- # jq -r . 00:06:36.479 [2024-12-15 05:46:57.860810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:36.479 [2024-12-15 05:46:57.861093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68303 ] 00:06:36.479 [2024-12-15 05:46:57.997503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.479 [2024-12-15 05:46:58.029553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.859 05:46:59 -- accel/accel.sh@18 -- # out=' 00:06:37.859 SPDK Configuration: 00:06:37.859 Core mask: 0x1 00:06:37.859 00:06:37.859 Accel Perf Configuration: 00:06:37.859 Workload Type: xor 00:06:37.859 Source buffers: 2 00:06:37.860 Transfer size: 4096 bytes 00:06:37.860 Vector count 1 00:06:37.860 Module: software 00:06:37.860 Queue depth: 32 00:06:37.860 Allocate depth: 32 00:06:37.860 # threads/core: 1 00:06:37.860 Run time: 1 seconds 00:06:37.860 Verify: Yes 00:06:37.860 00:06:37.860 Running for 1 seconds... 00:06:37.860 00:06:37.860 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.860 ------------------------------------------------------------------------------------ 00:06:37.860 0,0 267968/s 1046 MiB/s 0 0 00:06:37.860 ==================================================================================== 00:06:37.860 Total 267968/s 1046 MiB/s 0 0' 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.860 05:46:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:37.860 05:46:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.860 05:46:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.860 05:46:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.860 05:46:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.860 05:46:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.860 05:46:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.860 05:46:59 -- accel/accel.sh@42 -- # jq -r . 00:06:37.860 [2024-12-15 05:46:59.168966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:37.860 [2024-12-15 05:46:59.169735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68317 ] 00:06:37.860 [2024-12-15 05:46:59.301503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.860 [2024-12-15 05:46:59.332239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val= 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val= 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val=0x1 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val= 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val= 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val=xor 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val=2 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val= 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val=software 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val=32 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val=32 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val=1 00:06:37.860 05:46:59 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val=Yes 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val= 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:37.860 05:46:59 -- accel/accel.sh@21 -- # val= 00:06:37.860 05:46:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # IFS=: 00:06:37.860 05:46:59 -- accel/accel.sh@20 -- # read -r var val 00:06:39.238 05:47:00 -- accel/accel.sh@21 -- # val= 00:06:39.238 05:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.238 05:47:00 -- accel/accel.sh@20 -- # IFS=: 00:06:39.238 05:47:00 -- accel/accel.sh@20 -- # read -r var val 00:06:39.238 05:47:00 -- accel/accel.sh@21 -- # val= 00:06:39.238 05:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.238 05:47:00 -- accel/accel.sh@20 -- # IFS=: 00:06:39.238 05:47:00 -- accel/accel.sh@20 -- # read -r var val 00:06:39.239 05:47:00 -- accel/accel.sh@21 -- # val= 00:06:39.239 05:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.239 05:47:00 -- accel/accel.sh@20 -- # IFS=: 00:06:39.239 05:47:00 -- accel/accel.sh@20 -- # read -r var val 00:06:39.239 05:47:00 -- accel/accel.sh@21 -- # val= 00:06:39.239 05:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.239 05:47:00 -- accel/accel.sh@20 -- # IFS=: 00:06:39.239 05:47:00 -- accel/accel.sh@20 -- # read -r var val 00:06:39.239 05:47:00 -- accel/accel.sh@21 -- # val= 00:06:39.239 05:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.239 05:47:00 -- accel/accel.sh@20 -- # IFS=: 00:06:39.239 05:47:00 -- accel/accel.sh@20 -- # read -r var val 00:06:39.239 05:47:00 -- accel/accel.sh@21 -- # val= 00:06:39.239 05:47:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.239 05:47:00 -- accel/accel.sh@20 -- # IFS=: 00:06:39.239 05:47:00 -- accel/accel.sh@20 -- # read -r var val 00:06:39.239 05:47:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.239 05:47:00 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:39.239 05:47:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.239 00:06:39.239 real 0m2.617s 00:06:39.239 user 0m2.268s 00:06:39.239 sys 0m0.147s 00:06:39.239 05:47:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.239 05:47:00 -- common/autotest_common.sh@10 -- # set +x 00:06:39.239 ************************************ 00:06:39.239 END TEST accel_xor 00:06:39.239 ************************************ 00:06:39.239 05:47:00 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:39.239 05:47:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:39.239 05:47:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.239 05:47:00 -- common/autotest_common.sh@10 -- # set +x 00:06:39.239 ************************************ 00:06:39.239 START TEST accel_xor 00:06:39.239 ************************************ 00:06:39.239 
05:47:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:39.239 05:47:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.239 05:47:00 -- accel/accel.sh@17 -- # local accel_module 00:06:39.239 05:47:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:39.239 05:47:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:39.239 05:47:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.239 05:47:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.239 05:47:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.239 05:47:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.239 05:47:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.239 05:47:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.239 05:47:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.239 05:47:00 -- accel/accel.sh@42 -- # jq -r . 00:06:39.239 [2024-12-15 05:47:00.532441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:39.239 [2024-12-15 05:47:00.532676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68353 ] 00:06:39.239 [2024-12-15 05:47:00.668103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.239 [2024-12-15 05:47:00.700108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.633 05:47:01 -- accel/accel.sh@18 -- # out=' 00:06:40.633 SPDK Configuration: 00:06:40.633 Core mask: 0x1 00:06:40.633 00:06:40.633 Accel Perf Configuration: 00:06:40.633 Workload Type: xor 00:06:40.633 Source buffers: 3 00:06:40.633 Transfer size: 4096 bytes 00:06:40.633 Vector count 1 00:06:40.633 Module: software 00:06:40.633 Queue depth: 32 00:06:40.633 Allocate depth: 32 00:06:40.633 # threads/core: 1 00:06:40.633 Run time: 1 seconds 00:06:40.633 Verify: Yes 00:06:40.633 00:06:40.633 Running for 1 seconds... 00:06:40.633 00:06:40.633 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.633 ------------------------------------------------------------------------------------ 00:06:40.633 0,0 250592/s 978 MiB/s 0 0 00:06:40.633 ==================================================================================== 00:06:40.633 Total 250592/s 978 MiB/s 0 0' 00:06:40.633 05:47:01 -- accel/accel.sh@20 -- # IFS=: 00:06:40.633 05:47:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:40.633 05:47:01 -- accel/accel.sh@20 -- # read -r var val 00:06:40.633 05:47:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:40.634 05:47:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.634 05:47:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.634 05:47:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.634 05:47:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.634 05:47:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.634 05:47:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.634 05:47:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.634 05:47:01 -- accel/accel.sh@42 -- # jq -r . 00:06:40.634 [2024-12-15 05:47:01.846121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:40.634 [2024-12-15 05:47:01.846291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68367 ] 00:06:40.634 [2024-12-15 05:47:01.982021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.634 [2024-12-15 05:47:02.013490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val= 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val= 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val=0x1 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val= 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val= 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val=xor 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val=3 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val= 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val=software 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val=32 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val=32 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val=1 00:06:40.634 05:47:02 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val=Yes 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val= 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:40.634 05:47:02 -- accel/accel.sh@21 -- # val= 00:06:40.634 05:47:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # IFS=: 00:06:40.634 05:47:02 -- accel/accel.sh@20 -- # read -r var val 00:06:41.571 05:47:03 -- accel/accel.sh@21 -- # val= 00:06:41.571 05:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # IFS=: 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # read -r var val 00:06:41.571 05:47:03 -- accel/accel.sh@21 -- # val= 00:06:41.571 05:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # IFS=: 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # read -r var val 00:06:41.571 05:47:03 -- accel/accel.sh@21 -- # val= 00:06:41.571 05:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # IFS=: 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # read -r var val 00:06:41.571 05:47:03 -- accel/accel.sh@21 -- # val= 00:06:41.571 05:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # IFS=: 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # read -r var val 00:06:41.571 05:47:03 -- accel/accel.sh@21 -- # val= 00:06:41.571 05:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # IFS=: 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # read -r var val 00:06:41.571 05:47:03 -- accel/accel.sh@21 -- # val= 00:06:41.571 05:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # IFS=: 00:06:41.571 05:47:03 -- accel/accel.sh@20 -- # read -r var val 00:06:41.571 05:47:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.571 05:47:03 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:41.571 05:47:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.571 00:06:41.571 real 0m2.632s 00:06:41.571 user 0m2.278s 00:06:41.571 sys 0m0.149s 00:06:41.571 05:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.571 ************************************ 00:06:41.571 END TEST accel_xor 00:06:41.571 ************************************ 00:06:41.571 05:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:41.571 05:47:03 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:41.571 05:47:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:41.571 05:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.571 05:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:41.571 ************************************ 00:06:41.571 START TEST accel_dif_verify 00:06:41.571 ************************************ 
00:06:41.571 05:47:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:41.571 05:47:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.571 05:47:03 -- accel/accel.sh@17 -- # local accel_module 00:06:41.571 05:47:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:41.571 05:47:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.571 05:47:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:41.571 05:47:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.571 05:47:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.571 05:47:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.571 05:47:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.571 05:47:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.571 05:47:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.571 05:47:03 -- accel/accel.sh@42 -- # jq -r . 00:06:41.830 [2024-12-15 05:47:03.223181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:41.830 [2024-12-15 05:47:03.223305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68401 ] 00:06:41.830 [2024-12-15 05:47:03.358747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.830 [2024-12-15 05:47:03.395393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.208 05:47:04 -- accel/accel.sh@18 -- # out=' 00:06:43.208 SPDK Configuration: 00:06:43.208 Core mask: 0x1 00:06:43.208 00:06:43.208 Accel Perf Configuration: 00:06:43.208 Workload Type: dif_verify 00:06:43.208 Vector size: 4096 bytes 00:06:43.208 Transfer size: 4096 bytes 00:06:43.208 Block size: 512 bytes 00:06:43.208 Metadata size: 8 bytes 00:06:43.208 Vector count 1 00:06:43.208 Module: software 00:06:43.208 Queue depth: 32 00:06:43.208 Allocate depth: 32 00:06:43.208 # threads/core: 1 00:06:43.208 Run time: 1 seconds 00:06:43.208 Verify: No 00:06:43.208 00:06:43.208 Running for 1 seconds... 00:06:43.208 00:06:43.208 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.208 ------------------------------------------------------------------------------------ 00:06:43.208 0,0 111712/s 443 MiB/s 0 0 00:06:43.208 ==================================================================================== 00:06:43.208 Total 111712/s 436 MiB/s 0 0' 00:06:43.208 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.208 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.208 05:47:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:43.208 05:47:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:43.208 05:47:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.208 05:47:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.208 05:47:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.208 05:47:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.208 05:47:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.208 05:47:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.208 05:47:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.208 05:47:04 -- accel/accel.sh@42 -- # jq -r . 00:06:43.208 [2024-12-15 05:47:04.537489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:43.208 [2024-12-15 05:47:04.537583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68421 ] 00:06:43.208 [2024-12-15 05:47:04.671387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.208 [2024-12-15 05:47:04.703072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.208 05:47:04 -- accel/accel.sh@21 -- # val= 00:06:43.208 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.208 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val= 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val=0x1 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val= 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val= 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val=dif_verify 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val= 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val=software 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 
-- # val=32 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val=32 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val=1 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val=No 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val= 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:43.209 05:47:04 -- accel/accel.sh@21 -- # val= 00:06:43.209 05:47:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # IFS=: 00:06:43.209 05:47:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.587 05:47:05 -- accel/accel.sh@21 -- # val= 00:06:44.587 05:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # IFS=: 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # read -r var val 00:06:44.587 05:47:05 -- accel/accel.sh@21 -- # val= 00:06:44.587 05:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # IFS=: 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # read -r var val 00:06:44.587 05:47:05 -- accel/accel.sh@21 -- # val= 00:06:44.587 05:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # IFS=: 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # read -r var val 00:06:44.587 05:47:05 -- accel/accel.sh@21 -- # val= 00:06:44.587 05:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # IFS=: 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # read -r var val 00:06:44.587 05:47:05 -- accel/accel.sh@21 -- # val= 00:06:44.587 05:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # IFS=: 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # read -r var val 00:06:44.587 05:47:05 -- accel/accel.sh@21 -- # val= 00:06:44.587 05:47:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # IFS=: 00:06:44.587 05:47:05 -- accel/accel.sh@20 -- # read -r var val 00:06:44.587 05:47:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.587 05:47:05 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:44.587 05:47:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.587 00:06:44.587 real 0m2.628s 00:06:44.587 user 0m2.282s 00:06:44.587 sys 0m0.145s 00:06:44.587 05:47:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.587 05:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:44.587 ************************************ 00:06:44.587 END TEST 
accel_dif_verify 00:06:44.587 ************************************ 00:06:44.587 05:47:05 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:44.587 05:47:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:44.587 05:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.587 05:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:44.587 ************************************ 00:06:44.587 START TEST accel_dif_generate 00:06:44.587 ************************************ 00:06:44.587 05:47:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:44.587 05:47:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.587 05:47:05 -- accel/accel.sh@17 -- # local accel_module 00:06:44.587 05:47:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:44.587 05:47:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:44.587 05:47:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.587 05:47:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.587 05:47:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.587 05:47:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.587 05:47:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.587 05:47:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.587 05:47:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.587 05:47:05 -- accel/accel.sh@42 -- # jq -r . 00:06:44.587 [2024-12-15 05:47:05.903429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:44.587 [2024-12-15 05:47:05.903519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68450 ] 00:06:44.587 [2024-12-15 05:47:06.031842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.587 [2024-12-15 05:47:06.069042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.966 05:47:07 -- accel/accel.sh@18 -- # out=' 00:06:45.966 SPDK Configuration: 00:06:45.966 Core mask: 0x1 00:06:45.966 00:06:45.966 Accel Perf Configuration: 00:06:45.966 Workload Type: dif_generate 00:06:45.966 Vector size: 4096 bytes 00:06:45.966 Transfer size: 4096 bytes 00:06:45.966 Block size: 512 bytes 00:06:45.966 Metadata size: 8 bytes 00:06:45.966 Vector count 1 00:06:45.966 Module: software 00:06:45.966 Queue depth: 32 00:06:45.966 Allocate depth: 32 00:06:45.966 # threads/core: 1 00:06:45.966 Run time: 1 seconds 00:06:45.966 Verify: No 00:06:45.966 00:06:45.966 Running for 1 seconds... 
00:06:45.966 00:06:45.966 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.966 ------------------------------------------------------------------------------------ 00:06:45.966 0,0 144064/s 571 MiB/s 0 0 00:06:45.966 ==================================================================================== 00:06:45.966 Total 144064/s 562 MiB/s 0 0' 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:45.966 05:47:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:45.966 05:47:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.966 05:47:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.966 05:47:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.966 05:47:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.966 05:47:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.966 05:47:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.966 05:47:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.966 05:47:07 -- accel/accel.sh@42 -- # jq -r . 00:06:45.966 [2024-12-15 05:47:07.210191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:45.966 [2024-12-15 05:47:07.210275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68469 ] 00:06:45.966 [2024-12-15 05:47:07.337001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.966 [2024-12-15 05:47:07.367488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val= 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val= 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val=0x1 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val= 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val= 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val=dif_generate 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 
00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val= 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val=software 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val=32 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val=32 00:06:45.966 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.966 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.966 05:47:07 -- accel/accel.sh@21 -- # val=1 00:06:45.967 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.967 05:47:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.967 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.967 05:47:07 -- accel/accel.sh@21 -- # val=No 00:06:45.967 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.967 05:47:07 -- accel/accel.sh@21 -- # val= 00:06:45.967 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:45.967 05:47:07 -- accel/accel.sh@21 -- # val= 00:06:45.967 05:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # IFS=: 00:06:45.967 05:47:07 -- accel/accel.sh@20 -- # read -r var val 00:06:46.905 05:47:08 -- accel/accel.sh@21 -- # val= 00:06:46.905 05:47:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # IFS=: 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # read -r var val 00:06:46.905 05:47:08 -- accel/accel.sh@21 -- # val= 00:06:46.905 05:47:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # IFS=: 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # read -r var val 00:06:46.905 05:47:08 -- accel/accel.sh@21 -- # val= 00:06:46.905 05:47:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.905 05:47:08 -- 
accel/accel.sh@20 -- # IFS=: 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # read -r var val 00:06:46.905 05:47:08 -- accel/accel.sh@21 -- # val= 00:06:46.905 05:47:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # IFS=: 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # read -r var val 00:06:46.905 05:47:08 -- accel/accel.sh@21 -- # val= 00:06:46.905 05:47:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # IFS=: 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # read -r var val 00:06:46.905 05:47:08 -- accel/accel.sh@21 -- # val= 00:06:46.905 05:47:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # IFS=: 00:06:46.905 05:47:08 -- accel/accel.sh@20 -- # read -r var val 00:06:46.905 05:47:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.905 05:47:08 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:46.905 05:47:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.905 00:06:46.905 real 0m2.609s 00:06:46.905 user 0m2.282s 00:06:46.905 sys 0m0.129s 00:06:46.905 05:47:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.905 ************************************ 00:06:46.905 END TEST accel_dif_generate 00:06:46.905 ************************************ 00:06:46.905 05:47:08 -- common/autotest_common.sh@10 -- # set +x 00:06:46.905 05:47:08 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:46.905 05:47:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:46.905 05:47:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.905 05:47:08 -- common/autotest_common.sh@10 -- # set +x 00:06:46.905 ************************************ 00:06:46.905 START TEST accel_dif_generate_copy 00:06:46.905 ************************************ 00:06:46.905 05:47:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:46.905 05:47:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.905 05:47:08 -- accel/accel.sh@17 -- # local accel_module 00:06:46.905 05:47:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:46.905 05:47:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.906 05:47:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.906 05:47:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.164 05:47:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.164 05:47:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.165 05:47:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.165 05:47:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.165 05:47:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.165 05:47:08 -- accel/accel.sh@42 -- # jq -r . 00:06:47.165 [2024-12-15 05:47:08.563450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:47.165 [2024-12-15 05:47:08.563544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68504 ] 00:06:47.165 [2024-12-15 05:47:08.694521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.165 [2024-12-15 05:47:08.724853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.543 05:47:09 -- accel/accel.sh@18 -- # out=' 00:06:48.543 SPDK Configuration: 00:06:48.543 Core mask: 0x1 00:06:48.543 00:06:48.543 Accel Perf Configuration: 00:06:48.543 Workload Type: dif_generate_copy 00:06:48.543 Vector size: 4096 bytes 00:06:48.543 Transfer size: 4096 bytes 00:06:48.543 Vector count 1 00:06:48.543 Module: software 00:06:48.543 Queue depth: 32 00:06:48.543 Allocate depth: 32 00:06:48.543 # threads/core: 1 00:06:48.543 Run time: 1 seconds 00:06:48.543 Verify: No 00:06:48.543 00:06:48.543 Running for 1 seconds... 00:06:48.543 00:06:48.543 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.543 ------------------------------------------------------------------------------------ 00:06:48.543 0,0 109824/s 435 MiB/s 0 0 00:06:48.543 ==================================================================================== 00:06:48.543 Total 109824/s 429 MiB/s 0 0' 00:06:48.543 05:47:09 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:48.543 05:47:09 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:48.543 05:47:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.543 05:47:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.543 05:47:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.543 05:47:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.543 05:47:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.543 05:47:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.543 05:47:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.543 05:47:09 -- accel/accel.sh@42 -- # jq -r . 00:06:48.543 [2024-12-15 05:47:09.866714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:48.543 [2024-12-15 05:47:09.866804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68518 ] 00:06:48.543 [2024-12-15 05:47:09.993588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.543 [2024-12-15 05:47:10.026714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val= 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val= 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val=0x1 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val= 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val= 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val= 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val=software 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val=32 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val=32 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 
-- # val=1 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val=No 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val= 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.543 05:47:10 -- accel/accel.sh@21 -- # val= 00:06:48.543 05:47:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.543 05:47:10 -- accel/accel.sh@20 -- # read -r var val 00:06:49.921 05:47:11 -- accel/accel.sh@21 -- # val= 00:06:49.921 05:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.921 05:47:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.921 05:47:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.921 05:47:11 -- accel/accel.sh@21 -- # val= 00:06:49.921 05:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.921 05:47:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.921 05:47:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.921 05:47:11 -- accel/accel.sh@21 -- # val= 00:06:49.922 05:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.922 05:47:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.922 05:47:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.922 05:47:11 -- accel/accel.sh@21 -- # val= 00:06:49.922 05:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.922 05:47:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.922 05:47:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.922 05:47:11 -- accel/accel.sh@21 -- # val= 00:06:49.922 05:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.922 05:47:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.922 05:47:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.922 05:47:11 -- accel/accel.sh@21 -- # val= 00:06:49.922 05:47:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.922 05:47:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.922 05:47:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.922 05:47:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.922 05:47:11 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:49.922 05:47:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.922 00:06:49.922 real 0m2.605s 00:06:49.922 user 0m2.274s 00:06:49.922 sys 0m0.132s 00:06:49.922 05:47:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.922 05:47:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.922 ************************************ 00:06:49.922 END TEST accel_dif_generate_copy 00:06:49.922 ************************************ 00:06:49.922 05:47:11 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:49.922 05:47:11 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.922 05:47:11 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:49.922 05:47:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.922 05:47:11 -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.922 ************************************ 00:06:49.922 START TEST accel_comp 00:06:49.922 ************************************ 00:06:49.922 05:47:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.922 05:47:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.922 05:47:11 -- accel/accel.sh@17 -- # local accel_module 00:06:49.922 05:47:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.922 05:47:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.922 05:47:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.922 05:47:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.922 05:47:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.922 05:47:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.922 05:47:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.922 05:47:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.922 05:47:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.922 05:47:11 -- accel/accel.sh@42 -- # jq -r . 00:06:49.922 [2024-12-15 05:47:11.216820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:49.922 [2024-12-15 05:47:11.216965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68547 ] 00:06:49.922 [2024-12-15 05:47:11.347758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.922 [2024-12-15 05:47:11.378691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.859 05:47:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:50.859 00:06:50.859 SPDK Configuration: 00:06:50.859 Core mask: 0x1 00:06:50.859 00:06:50.859 Accel Perf Configuration: 00:06:50.859 Workload Type: compress 00:06:50.859 Transfer size: 4096 bytes 00:06:50.859 Vector count 1 00:06:50.859 Module: software 00:06:50.859 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.859 Queue depth: 32 00:06:50.859 Allocate depth: 32 00:06:50.859 # threads/core: 1 00:06:50.859 Run time: 1 seconds 00:06:50.859 Verify: No 00:06:50.859 00:06:50.859 Running for 1 seconds... 
00:06:50.859 00:06:50.859 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.859 ------------------------------------------------------------------------------------ 00:06:50.859 0,0 55968/s 233 MiB/s 0 0 00:06:50.859 ==================================================================================== 00:06:50.859 Total 55968/s 218 MiB/s 0 0' 00:06:50.859 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:50.859 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:50.859 05:47:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.118 05:47:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.118 05:47:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.118 05:47:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.118 05:47:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.118 05:47:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.118 05:47:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.118 05:47:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.118 05:47:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.118 05:47:12 -- accel/accel.sh@42 -- # jq -r . 00:06:51.118 [2024-12-15 05:47:12.520083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:51.119 [2024-12-15 05:47:12.520191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68566 ] 00:06:51.119 [2024-12-15 05:47:12.654829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.119 [2024-12-15 05:47:12.685450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val= 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val= 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val= 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val=0x1 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val= 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val= 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val=compress 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 
00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val= 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val=software 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val=32 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val=32 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val=1 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val=No 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val= 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 05:47:12 -- accel/accel.sh@21 -- # val= 00:06:51.119 05:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 05:47:12 -- accel/accel.sh@20 -- # read -r var val 00:06:52.497 05:47:13 -- accel/accel.sh@21 -- # val= 00:06:52.497 05:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # IFS=: 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # read -r var val 00:06:52.497 05:47:13 -- accel/accel.sh@21 -- # val= 00:06:52.497 05:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # IFS=: 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # read -r var val 00:06:52.497 05:47:13 -- accel/accel.sh@21 -- # val= 00:06:52.497 05:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # IFS=: 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # read -r var val 00:06:52.497 05:47:13 -- accel/accel.sh@21 -- # val= 
00:06:52.497 05:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # IFS=: 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # read -r var val 00:06:52.497 05:47:13 -- accel/accel.sh@21 -- # val= 00:06:52.497 05:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # IFS=: 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # read -r var val 00:06:52.497 05:47:13 -- accel/accel.sh@21 -- # val= 00:06:52.497 05:47:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # IFS=: 00:06:52.497 05:47:13 -- accel/accel.sh@20 -- # read -r var val 00:06:52.497 05:47:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.497 05:47:13 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:52.497 05:47:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.497 00:06:52.497 real 0m2.614s 00:06:52.497 user 0m2.270s 00:06:52.497 sys 0m0.148s 00:06:52.497 05:47:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.497 05:47:13 -- common/autotest_common.sh@10 -- # set +x 00:06:52.497 ************************************ 00:06:52.497 END TEST accel_comp 00:06:52.497 ************************************ 00:06:52.497 05:47:13 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.497 05:47:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:52.497 05:47:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.497 05:47:13 -- common/autotest_common.sh@10 -- # set +x 00:06:52.497 ************************************ 00:06:52.497 START TEST accel_decomp 00:06:52.497 ************************************ 00:06:52.497 05:47:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.497 05:47:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.497 05:47:13 -- accel/accel.sh@17 -- # local accel_module 00:06:52.497 05:47:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.497 05:47:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.497 05:47:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.497 05:47:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.497 05:47:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.497 05:47:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.497 05:47:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.497 05:47:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.497 05:47:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.497 05:47:13 -- accel/accel.sh@42 -- # jq -r . 00:06:52.497 [2024-12-15 05:47:13.884291] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:52.497 [2024-12-15 05:47:13.884991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68601 ] 00:06:52.497 [2024-12-15 05:47:14.019795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.497 [2024-12-15 05:47:14.050461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.876 05:47:15 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:53.876 00:06:53.876 SPDK Configuration: 00:06:53.876 Core mask: 0x1 00:06:53.876 00:06:53.876 Accel Perf Configuration: 00:06:53.876 Workload Type: decompress 00:06:53.876 Transfer size: 4096 bytes 00:06:53.876 Vector count 1 00:06:53.876 Module: software 00:06:53.876 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.876 Queue depth: 32 00:06:53.876 Allocate depth: 32 00:06:53.876 # threads/core: 1 00:06:53.876 Run time: 1 seconds 00:06:53.876 Verify: Yes 00:06:53.876 00:06:53.876 Running for 1 seconds... 00:06:53.876 00:06:53.876 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.876 ------------------------------------------------------------------------------------ 00:06:53.876 0,0 78976/s 145 MiB/s 0 0 00:06:53.876 ==================================================================================== 00:06:53.876 Total 78976/s 308 MiB/s 0 0' 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.876 05:47:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.876 05:47:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.876 05:47:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.876 05:47:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.876 05:47:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.876 05:47:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.876 05:47:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.876 05:47:15 -- accel/accel.sh@42 -- # jq -r . 00:06:53.876 [2024-12-15 05:47:15.191949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:53.876 [2024-12-15 05:47:15.192035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68615 ] 00:06:53.876 [2024-12-15 05:47:15.326221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.876 [2024-12-15 05:47:15.356837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val= 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val= 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val= 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val=0x1 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val= 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val= 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val=decompress 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val= 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val=software 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val=32 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- 
accel/accel.sh@21 -- # val=32 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val=1 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val=Yes 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val= 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.876 05:47:15 -- accel/accel.sh@21 -- # val= 00:06:53.876 05:47:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.876 05:47:15 -- accel/accel.sh@20 -- # read -r var val 00:06:54.838 05:47:16 -- accel/accel.sh@21 -- # val= 00:06:54.838 05:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.838 05:47:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.838 05:47:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.838 05:47:16 -- accel/accel.sh@21 -- # val= 00:06:54.838 05:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.838 05:47:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.838 05:47:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.838 05:47:16 -- accel/accel.sh@21 -- # val= 00:06:54.838 05:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.838 05:47:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.838 05:47:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.838 05:47:16 -- accel/accel.sh@21 -- # val= 00:06:55.098 05:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.098 05:47:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.098 05:47:16 -- accel/accel.sh@20 -- # read -r var val 00:06:55.098 05:47:16 -- accel/accel.sh@21 -- # val= 00:06:55.098 05:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.098 05:47:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.098 05:47:16 -- accel/accel.sh@20 -- # read -r var val 00:06:55.098 05:47:16 -- accel/accel.sh@21 -- # val= 00:06:55.098 05:47:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.098 05:47:16 -- accel/accel.sh@20 -- # IFS=: 00:06:55.098 05:47:16 -- accel/accel.sh@20 -- # read -r var val 00:06:55.098 05:47:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.098 05:47:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:55.098 05:47:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.098 00:06:55.098 real 0m2.619s 00:06:55.098 user 0m2.279s 00:06:55.098 sys 0m0.135s 00:06:55.098 05:47:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.098 05:47:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.098 ************************************ 00:06:55.098 END TEST accel_decomp 00:06:55.098 ************************************ 00:06:55.098 05:47:16 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:06:55.098 05:47:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:55.098 05:47:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.098 05:47:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.098 ************************************ 00:06:55.098 START TEST accel_decmop_full 00:06:55.098 ************************************ 00:06:55.098 05:47:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.098 05:47:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.098 05:47:16 -- accel/accel.sh@17 -- # local accel_module 00:06:55.098 05:47:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.098 05:47:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.098 05:47:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.098 05:47:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.098 05:47:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.098 05:47:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.098 05:47:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.098 05:47:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.098 05:47:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.098 05:47:16 -- accel/accel.sh@42 -- # jq -r . 00:06:55.098 [2024-12-15 05:47:16.559328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.098 [2024-12-15 05:47:16.559563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68646 ] 00:06:55.098 [2024-12-15 05:47:16.696054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.098 [2024-12-15 05:47:16.726811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.477 05:47:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:56.477 00:06:56.477 SPDK Configuration: 00:06:56.477 Core mask: 0x1 00:06:56.477 00:06:56.477 Accel Perf Configuration: 00:06:56.477 Workload Type: decompress 00:06:56.477 Transfer size: 111250 bytes 00:06:56.477 Vector count 1 00:06:56.477 Module: software 00:06:56.477 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.477 Queue depth: 32 00:06:56.477 Allocate depth: 32 00:06:56.477 # threads/core: 1 00:06:56.477 Run time: 1 seconds 00:06:56.477 Verify: Yes 00:06:56.477 00:06:56.477 Running for 1 seconds... 
00:06:56.477 00:06:56.477 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.477 ------------------------------------------------------------------------------------ 00:06:56.477 0,0 5312/s 219 MiB/s 0 0 00:06:56.477 ==================================================================================== 00:06:56.477 Total 5312/s 563 MiB/s 0 0' 00:06:56.477 05:47:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:56.477 05:47:17 -- accel/accel.sh@20 -- # IFS=: 00:06:56.477 05:47:17 -- accel/accel.sh@20 -- # read -r var val 00:06:56.477 05:47:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:56.477 05:47:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.477 05:47:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.477 05:47:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.477 05:47:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.477 05:47:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.477 05:47:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.477 05:47:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.477 05:47:17 -- accel/accel.sh@42 -- # jq -r . 00:06:56.477 [2024-12-15 05:47:17.875308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:56.477 [2024-12-15 05:47:17.875399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68669 ] 00:06:56.477 [2024-12-15 05:47:18.003095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.477 [2024-12-15 05:47:18.033550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.477 05:47:18 -- accel/accel.sh@21 -- # val= 00:06:56.477 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.477 05:47:18 -- accel/accel.sh@21 -- # val= 00:06:56.477 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.477 05:47:18 -- accel/accel.sh@21 -- # val= 00:06:56.477 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.477 05:47:18 -- accel/accel.sh@21 -- # val=0x1 00:06:56.477 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.477 05:47:18 -- accel/accel.sh@21 -- # val= 00:06:56.477 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.477 05:47:18 -- accel/accel.sh@21 -- # val= 00:06:56.477 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.477 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.477 05:47:18 -- accel/accel.sh@21 -- # val=decompress 00:06:56.477 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.477 05:47:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:56.477 05:47:18 -- accel/accel.sh@20 
-- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val= 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val=software 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val=32 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val=32 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val=1 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val=Yes 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val= 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.478 05:47:18 -- accel/accel.sh@21 -- # val= 00:06:56.478 05:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # IFS=: 00:06:56.478 05:47:18 -- accel/accel.sh@20 -- # read -r var val 00:06:57.855 05:47:19 -- accel/accel.sh@21 -- # val= 00:06:57.855 05:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # IFS=: 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # read -r var val 00:06:57.855 05:47:19 -- accel/accel.sh@21 -- # val= 00:06:57.855 05:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # IFS=: 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # read -r var val 00:06:57.855 05:47:19 -- accel/accel.sh@21 -- # val= 00:06:57.855 05:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # IFS=: 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # read -r var val 00:06:57.855 05:47:19 -- accel/accel.sh@21 -- # 
val= 00:06:57.855 05:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # IFS=: 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # read -r var val 00:06:57.855 05:47:19 -- accel/accel.sh@21 -- # val= 00:06:57.855 05:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # IFS=: 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # read -r var val 00:06:57.855 05:47:19 -- accel/accel.sh@21 -- # val= 00:06:57.855 05:47:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # IFS=: 00:06:57.855 05:47:19 -- accel/accel.sh@20 -- # read -r var val 00:06:57.855 05:47:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.855 05:47:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:57.855 05:47:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.855 00:06:57.855 real 0m2.625s 00:06:57.855 user 0m2.287s 00:06:57.855 sys 0m0.139s 00:06:57.855 05:47:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.855 ************************************ 00:06:57.855 END TEST accel_decmop_full 00:06:57.855 ************************************ 00:06:57.855 05:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:57.855 05:47:19 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:57.855 05:47:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:57.856 05:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.856 05:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:57.856 ************************************ 00:06:57.856 START TEST accel_decomp_mcore 00:06:57.856 ************************************ 00:06:57.856 05:47:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:57.856 05:47:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.856 05:47:19 -- accel/accel.sh@17 -- # local accel_module 00:06:57.856 05:47:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:57.856 05:47:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.856 05:47:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:57.856 05:47:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.856 05:47:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.856 05:47:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.856 05:47:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.856 05:47:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.856 05:47:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.856 05:47:19 -- accel/accel.sh@42 -- # jq -r . 00:06:57.856 [2024-12-15 05:47:19.232649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:57.856 [2024-12-15 05:47:19.232739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68698 ] 00:06:57.856 [2024-12-15 05:47:19.368044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.856 [2024-12-15 05:47:19.400787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.856 [2024-12-15 05:47:19.400927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.856 [2024-12-15 05:47:19.401054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.856 [2024-12-15 05:47:19.401058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.235 05:47:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:59.235 00:06:59.235 SPDK Configuration: 00:06:59.235 Core mask: 0xf 00:06:59.235 00:06:59.235 Accel Perf Configuration: 00:06:59.235 Workload Type: decompress 00:06:59.235 Transfer size: 4096 bytes 00:06:59.235 Vector count 1 00:06:59.235 Module: software 00:06:59.235 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.235 Queue depth: 32 00:06:59.235 Allocate depth: 32 00:06:59.235 # threads/core: 1 00:06:59.235 Run time: 1 seconds 00:06:59.235 Verify: Yes 00:06:59.235 00:06:59.235 Running for 1 seconds... 00:06:59.235 00:06:59.235 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.235 ------------------------------------------------------------------------------------ 00:06:59.235 0,0 65184/s 120 MiB/s 0 0 00:06:59.235 3,0 62688/s 115 MiB/s 0 0 00:06:59.235 2,0 62304/s 114 MiB/s 0 0 00:06:59.235 1,0 62080/s 114 MiB/s 0 0 00:06:59.235 ==================================================================================== 00:06:59.235 Total 252256/s 985 MiB/s 0 0' 00:06:59.235 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.235 05:47:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.235 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.235 05:47:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.235 05:47:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.235 05:47:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.235 05:47:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.235 05:47:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.235 05:47:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.235 05:47:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.235 05:47:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.235 05:47:20 -- accel/accel.sh@42 -- # jq -r . 00:06:59.235 [2024-12-15 05:47:20.553560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
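A side note on the 0xf core mask used by accel_decomp_mcore: its four set bits account for the four "Reactor started on core N" lines and the four per-core rows above, whose transfer rates (65184 + 62688 + 62304 + 62080 = 252256/s) add up to the Total row. A small sketch, not part of the log, that decodes such a mask:

# Sketch only: expand a hex core mask like the -m 0xf passed to accel_perf
# into the individual core numbers that get a reactor.
mask=0xf
for core in $(seq 0 63); do
  if (( (mask >> core) & 1 )); then
    echo "reactor expected on core $core"
  fi
done
# 0xf = binary 1111 -> cores 0, 1, 2 and 3, matching the reactor lines above.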
00:06:59.235 [2024-12-15 05:47:20.553834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68715 ] 00:06:59.235 [2024-12-15 05:47:20.681143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.235 [2024-12-15 05:47:20.716579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.235 [2024-12-15 05:47:20.716730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.235 [2024-12-15 05:47:20.716816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.235 [2024-12-15 05:47:20.716986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.235 05:47:20 -- accel/accel.sh@21 -- # val= 00:06:59.235 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.235 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.235 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.235 05:47:20 -- accel/accel.sh@21 -- # val= 00:06:59.235 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.235 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.235 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.235 05:47:20 -- accel/accel.sh@21 -- # val= 00:06:59.235 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.235 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.235 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.235 05:47:20 -- accel/accel.sh@21 -- # val=0xf 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val= 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val= 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val=decompress 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val= 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val=software 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 
00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val=32 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val=32 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val=1 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val=Yes 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val= 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.236 05:47:20 -- accel/accel.sh@21 -- # val= 00:06:59.236 05:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.236 05:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@21 -- # val= 00:07:00.614 05:47:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # IFS=: 00:07:00.614 05:47:21 -- accel/accel.sh@20 -- # read -r var val 00:07:00.614 05:47:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.614 05:47:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:00.614 05:47:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.614 00:07:00.614 real 0m2.649s 00:07:00.614 user 0m8.740s 00:07:00.614 sys 0m0.167s 00:07:00.614 05:47:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.614 ************************************ 00:07:00.614 END TEST accel_decomp_mcore 00:07:00.614 ************************************ 00:07:00.614 05:47:21 -- common/autotest_common.sh@10 -- # set +x 00:07:00.614 05:47:21 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.614 05:47:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:00.614 05:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.614 05:47:21 -- common/autotest_common.sh@10 -- # set +x 00:07:00.614 ************************************ 00:07:00.614 START TEST accel_decomp_full_mcore 00:07:00.614 ************************************ 00:07:00.614 05:47:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.614 05:47:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.614 05:47:21 -- accel/accel.sh@17 -- # local accel_module 00:07:00.614 05:47:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.614 05:47:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.614 05:47:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.614 05:47:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.614 05:47:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.614 05:47:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.614 05:47:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.614 05:47:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.614 05:47:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.614 05:47:21 -- accel/accel.sh@42 -- # jq -r . 00:07:00.614 [2024-12-15 05:47:21.932313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:00.614 [2024-12-15 05:47:21.932417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68758 ] 00:07:00.614 [2024-12-15 05:47:22.068579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.614 [2024-12-15 05:47:22.101358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.614 [2024-12-15 05:47:22.101458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.614 [2024-12-15 05:47:22.101579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.614 [2024-12-15 05:47:22.101583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.996 05:47:23 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:01.996 00:07:01.996 SPDK Configuration: 00:07:01.996 Core mask: 0xf 00:07:01.996 00:07:01.996 Accel Perf Configuration: 00:07:01.996 Workload Type: decompress 00:07:01.996 Transfer size: 111250 bytes 00:07:01.996 Vector count 1 00:07:01.996 Module: software 00:07:01.996 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.996 Queue depth: 32 00:07:01.996 Allocate depth: 32 00:07:01.996 # threads/core: 1 00:07:01.996 Run time: 1 seconds 00:07:01.996 Verify: Yes 00:07:01.996 00:07:01.996 Running for 1 seconds... 00:07:01.996 00:07:01.996 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.996 ------------------------------------------------------------------------------------ 00:07:01.996 0,0 4864/s 200 MiB/s 0 0 00:07:01.996 3,0 4864/s 200 MiB/s 0 0 00:07:01.996 2,0 4864/s 200 MiB/s 0 0 00:07:01.996 1,0 4864/s 200 MiB/s 0 0 00:07:01.996 ==================================================================================== 00:07:01.996 Total 19456/s 2064 MiB/s 0 0' 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.996 05:47:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.996 05:47:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.996 05:47:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.996 05:47:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.996 05:47:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.996 05:47:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.996 05:47:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.996 05:47:23 -- accel/accel.sh@42 -- # jq -r . 00:07:01.996 [2024-12-15 05:47:23.267952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:01.996 [2024-12-15 05:47:23.268043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68775 ] 00:07:01.996 [2024-12-15 05:47:23.403075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.996 [2024-12-15 05:47:23.435527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.996 [2024-12-15 05:47:23.435664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.996 [2024-12-15 05:47:23.435747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.996 [2024-12-15 05:47:23.436045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val= 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val= 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val= 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val=0xf 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val= 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val= 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val=decompress 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val= 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val=software 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 
00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val=32 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val=32 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val=1 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.996 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.996 05:47:23 -- accel/accel.sh@21 -- # val=Yes 00:07:01.996 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.997 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.997 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.997 05:47:23 -- accel/accel.sh@21 -- # val= 00:07:01.997 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.997 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.997 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.997 05:47:23 -- accel/accel.sh@21 -- # val= 00:07:01.997 05:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.997 05:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:01.997 05:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:02.934 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:02.934 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.934 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:02.934 05:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.193 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:03.193 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.193 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.193 05:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.193 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:03.193 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.193 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.193 05:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.193 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:03.193 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.193 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.194 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:03.194 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.194 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:03.194 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.194 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:03.194 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.194 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:03.194 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.194 05:47:24 -- 
accel/accel.sh@20 -- # read -r var val 00:07:03.194 05:47:24 -- accel/accel.sh@21 -- # val= 00:07:03.194 05:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.194 05:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.194 05:47:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.194 05:47:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:03.194 05:47:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.194 00:07:03.194 real 0m2.676s 00:07:03.194 user 0m8.825s 00:07:03.194 sys 0m0.164s 00:07:03.194 05:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.194 05:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:03.194 ************************************ 00:07:03.194 END TEST accel_decomp_full_mcore 00:07:03.194 ************************************ 00:07:03.194 05:47:24 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:03.194 05:47:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:03.194 05:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.194 05:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:03.194 ************************************ 00:07:03.194 START TEST accel_decomp_mthread 00:07:03.194 ************************************ 00:07:03.194 05:47:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:03.194 05:47:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.194 05:47:24 -- accel/accel.sh@17 -- # local accel_module 00:07:03.194 05:47:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:03.194 05:47:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:03.194 05:47:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.194 05:47:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.194 05:47:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.194 05:47:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.194 05:47:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.194 05:47:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.194 05:47:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.194 05:47:24 -- accel/accel.sh@42 -- # jq -r . 00:07:03.194 [2024-12-15 05:47:24.657840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.194 [2024-12-15 05:47:24.657954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68807 ] 00:07:03.194 [2024-12-15 05:47:24.795047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.194 [2024-12-15 05:47:24.826803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.570 05:47:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:04.570 00:07:04.570 SPDK Configuration: 00:07:04.570 Core mask: 0x1 00:07:04.570 00:07:04.570 Accel Perf Configuration: 00:07:04.570 Workload Type: decompress 00:07:04.570 Transfer size: 4096 bytes 00:07:04.570 Vector count 1 00:07:04.570 Module: software 00:07:04.570 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.570 Queue depth: 32 00:07:04.570 Allocate depth: 32 00:07:04.570 # threads/core: 2 00:07:04.570 Run time: 1 seconds 00:07:04.570 Verify: Yes 00:07:04.570 00:07:04.570 Running for 1 seconds... 00:07:04.570 00:07:04.570 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.570 ------------------------------------------------------------------------------------ 00:07:04.570 0,1 40064/s 73 MiB/s 0 0 00:07:04.570 0,0 39936/s 73 MiB/s 0 0 00:07:04.570 ==================================================================================== 00:07:04.570 Total 80000/s 312 MiB/s 0 0' 00:07:04.570 05:47:25 -- accel/accel.sh@20 -- # IFS=: 00:07:04.570 05:47:25 -- accel/accel.sh@20 -- # read -r var val 00:07:04.570 05:47:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:04.570 05:47:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.570 05:47:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:04.570 05:47:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.570 05:47:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.570 05:47:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.570 05:47:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.570 05:47:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.570 05:47:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.570 05:47:25 -- accel/accel.sh@42 -- # jq -r . 00:07:04.570 [2024-12-15 05:47:25.973450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
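The two result rows 0,0 and 0,1 above come from the "-T 2" flag passed by accel_decomp_mthread: one core (mask 0x1) running two threads, which together reach the 80000/s Total (40064 + 39936). A sketch of that invocation, with SPDK_ROOT assumed and the harness's "-c /dev/fd/62" config again omitted:

# Sketch, not captured output: single-core, two-threads-per-core decompress run,
# producing the "# threads/core: 2" line seen in the configuration above.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk    # assumed checkout location
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_ROOT/test/accel/bib" -y -T 2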
00:07:04.570 [2024-12-15 05:47:25.973539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68827 ] 00:07:04.570 [2024-12-15 05:47:26.111443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.570 [2024-12-15 05:47:26.142077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.570 05:47:26 -- accel/accel.sh@21 -- # val= 00:07:04.570 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.570 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.570 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.570 05:47:26 -- accel/accel.sh@21 -- # val= 00:07:04.570 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.570 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.570 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.570 05:47:26 -- accel/accel.sh@21 -- # val= 00:07:04.570 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.570 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.570 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.570 05:47:26 -- accel/accel.sh@21 -- # val=0x1 00:07:04.570 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.570 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val= 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val= 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val=decompress 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val= 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val=software 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val=32 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- 
accel/accel.sh@21 -- # val=32 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val=2 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val=Yes 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val= 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.571 05:47:26 -- accel/accel.sh@21 -- # val= 00:07:04.571 05:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.571 05:47:26 -- accel/accel.sh@20 -- # read -r var val 00:07:05.950 05:47:27 -- accel/accel.sh@21 -- # val= 00:07:05.950 05:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.950 05:47:27 -- accel/accel.sh@21 -- # val= 00:07:05.950 05:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.950 05:47:27 -- accel/accel.sh@21 -- # val= 00:07:05.950 05:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.950 05:47:27 -- accel/accel.sh@21 -- # val= 00:07:05.950 05:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.950 ************************************ 00:07:05.950 END TEST accel_decomp_mthread 00:07:05.950 05:47:27 -- accel/accel.sh@21 -- # val= 00:07:05.950 05:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.950 05:47:27 -- accel/accel.sh@21 -- # val= 00:07:05.950 05:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.950 05:47:27 -- accel/accel.sh@21 -- # val= 00:07:05.950 05:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:05.950 05:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.950 05:47:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.950 05:47:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:05.950 05:47:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.950 00:07:05.950 real 0m2.632s 00:07:05.950 user 0m2.277s 00:07:05.950 sys 0m0.156s 00:07:05.950 05:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.950 05:47:27 -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.950 ************************************ 00:07:05.950 05:47:27 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.950 05:47:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:05.950 05:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.950 05:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:05.950 ************************************ 00:07:05.950 START TEST accel_deomp_full_mthread 00:07:05.950 ************************************ 00:07:05.950 05:47:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.950 05:47:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.950 05:47:27 -- accel/accel.sh@17 -- # local accel_module 00:07:05.950 05:47:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.950 05:47:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.950 05:47:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.950 05:47:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.950 05:47:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.950 05:47:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.950 05:47:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.950 05:47:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.950 05:47:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.950 05:47:27 -- accel/accel.sh@42 -- # jq -r . 00:07:05.950 [2024-12-15 05:47:27.343696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:05.950 [2024-12-15 05:47:27.343790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68861 ] 00:07:05.950 [2024-12-15 05:47:27.479047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.950 [2024-12-15 05:47:27.509594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.329 05:47:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:07.329 00:07:07.330 SPDK Configuration: 00:07:07.330 Core mask: 0x1 00:07:07.330 00:07:07.330 Accel Perf Configuration: 00:07:07.330 Workload Type: decompress 00:07:07.330 Transfer size: 111250 bytes 00:07:07.330 Vector count 1 00:07:07.330 Module: software 00:07:07.330 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.330 Queue depth: 32 00:07:07.330 Allocate depth: 32 00:07:07.330 # threads/core: 2 00:07:07.330 Run time: 1 seconds 00:07:07.330 Verify: Yes 00:07:07.330 00:07:07.330 Running for 1 seconds... 
00:07:07.330 00:07:07.330 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.330 ------------------------------------------------------------------------------------ 00:07:07.330 0,1 2688/s 111 MiB/s 0 0 00:07:07.330 0,0 2656/s 109 MiB/s 0 0 00:07:07.330 ==================================================================================== 00:07:07.330 Total 5344/s 566 MiB/s 0 0' 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.330 05:47:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.330 05:47:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.330 05:47:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.330 05:47:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.330 05:47:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.330 05:47:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.330 05:47:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.330 05:47:28 -- accel/accel.sh@42 -- # jq -r . 00:07:07.330 [2024-12-15 05:47:28.660191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:07.330 [2024-12-15 05:47:28.660275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68875 ] 00:07:07.330 [2024-12-15 05:47:28.786225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.330 [2024-12-15 05:47:28.816861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val= 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val= 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val= 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val=0x1 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val= 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val= 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val=decompress 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val= 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val=software 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val=32 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val=32 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val=2 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val=Yes 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val= 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:07.330 05:47:28 -- accel/accel.sh@21 -- # val= 00:07:07.330 05:47:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # IFS=: 00:07:07.330 05:47:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.709 05:47:29 -- accel/accel.sh@21 -- # val= 00:07:08.709 05:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.709 05:47:29 -- accel/accel.sh@21 -- # val= 00:07:08.709 05:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.709 05:47:29 -- accel/accel.sh@21 -- # val= 00:07:08.709 05:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # 
read -r var val 00:07:08.709 05:47:29 -- accel/accel.sh@21 -- # val= 00:07:08.709 05:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.709 05:47:29 -- accel/accel.sh@21 -- # val= 00:07:08.709 05:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.709 05:47:29 -- accel/accel.sh@21 -- # val= 00:07:08.709 05:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.709 ************************************ 00:07:08.709 END TEST accel_deomp_full_mthread 00:07:08.709 ************************************ 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.709 05:47:29 -- accel/accel.sh@21 -- # val= 00:07:08.709 05:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.709 05:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.709 05:47:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.709 05:47:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:08.709 05:47:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.709 00:07:08.709 real 0m2.648s 00:07:08.709 user 0m2.324s 00:07:08.709 sys 0m0.123s 00:07:08.709 05:47:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.709 05:47:29 -- common/autotest_common.sh@10 -- # set +x 00:07:08.709 05:47:30 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:08.709 05:47:30 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.709 05:47:30 -- accel/accel.sh@129 -- # build_accel_config 00:07:08.709 05:47:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.709 05:47:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:08.709 05:47:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.709 05:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.709 05:47:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.709 05:47:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.709 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:08.709 05:47:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.709 05:47:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.709 05:47:30 -- accel/accel.sh@42 -- # jq -r . 00:07:08.709 ************************************ 00:07:08.709 START TEST accel_dif_functional_tests 00:07:08.709 ************************************ 00:07:08.709 05:47:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.709 [2024-12-15 05:47:30.069215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
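Note on the run above: both accel_perf and the dif test binary receive their accel configuration on /dev/fd/62, i.e. the harness hands them a JSON config through an already-open descriptor rather than a file on disk. A minimal sketch of the same invocation using process substitution follows; the JSON body is only a placeholder (the harness builds the real one from accel_json_cfg) and the descriptor number bash picks will differ from run to run:

  # Placeholder config; the harness assembles the real one itself.
  cfg='{"subsystems": []}'
  # Flags copied from the trace: 1-second decompress run over test/accel/bib,
  # result verification (-y) and two worker threads (-T 2).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c <(printf '%s\n' "$cfg") \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
      -y -o 0 -T 2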
00:07:08.710 [2024-12-15 05:47:30.069337] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68911 ] 00:07:08.710 [2024-12-15 05:47:30.208929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.710 [2024-12-15 05:47:30.244668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.710 [2024-12-15 05:47:30.244751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.710 [2024-12-15 05:47:30.244756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.710 00:07:08.710 00:07:08.710 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.710 http://cunit.sourceforge.net/ 00:07:08.710 00:07:08.710 00:07:08.710 Suite: accel_dif 00:07:08.710 Test: verify: DIF generated, GUARD check ...passed 00:07:08.710 Test: verify: DIF generated, APPTAG check ...passed 00:07:08.710 Test: verify: DIF generated, REFTAG check ...passed 00:07:08.710 Test: verify: DIF not generated, GUARD check ...passed 00:07:08.710 Test: verify: DIF not generated, APPTAG check ...passed[2024-12-15 05:47:30.291182] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.710 [2024-12-15 05:47:30.291297] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.710 [2024-12-15 05:47:30.291338] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.710 [2024-12-15 05:47:30.291385] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.710 00:07:08.710 Test: verify: DIF not generated, REFTAG check ...[2024-12-15 05:47:30.291520] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.710 passed 00:07:08.710 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:08.710 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-15 05:47:30.291560] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.710 [2024-12-15 05:47:30.291622] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:08.710 passed 00:07:08.710 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:08.710 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:08.710 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:08.710 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-15 05:47:30.291952] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:08.710 passed 00:07:08.710 Test: generate copy: DIF generated, GUARD check ...passed 00:07:08.710 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:08.710 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:08.710 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:08.710 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:08.710 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:08.710 Test: generate copy: iovecs-len validate ...[2024-12-15 05:47:30.292561] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:08.710 passed 00:07:08.710 Test: generate copy: buffer alignment validate ...passed 00:07:08.710 00:07:08.710 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.710 suites 1 1 n/a 0 0 00:07:08.710 tests 20 20 20 0 0 00:07:08.710 asserts 204 204 204 0 n/a 00:07:08.710 00:07:08.710 Elapsed time = 0.005 seconds 00:07:08.969 ************************************ 00:07:08.969 END TEST accel_dif_functional_tests 00:07:08.969 ************************************ 00:07:08.969 00:07:08.969 real 0m0.404s 00:07:08.969 user 0m0.449s 00:07:08.969 sys 0m0.108s 00:07:08.969 05:47:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.969 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:08.969 ************************************ 00:07:08.969 END TEST accel 00:07:08.969 ************************************ 00:07:08.969 00:07:08.969 real 0m56.540s 00:07:08.969 user 1m1.823s 00:07:08.969 sys 0m4.230s 00:07:08.969 05:47:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.969 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:08.969 05:47:30 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:08.969 05:47:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.969 05:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.969 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:08.969 ************************************ 00:07:08.969 START TEST accel_rpc 00:07:08.969 ************************************ 00:07:08.969 05:47:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:08.969 * Looking for test storage... 00:07:08.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:08.969 05:47:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:08.969 05:47:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:08.969 05:47:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:09.228 05:47:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:09.228 05:47:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:09.228 05:47:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:09.228 05:47:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:09.228 05:47:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:09.228 05:47:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:09.228 05:47:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.228 05:47:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:09.228 05:47:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:09.228 05:47:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:09.228 05:47:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:09.228 05:47:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:09.228 05:47:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:09.228 05:47:30 -- scripts/common.sh@344 -- # : 1 00:07:09.228 05:47:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:09.228 05:47:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.228 05:47:30 -- scripts/common.sh@364 -- # decimal 1 00:07:09.228 05:47:30 -- scripts/common.sh@352 -- # local d=1 00:07:09.228 05:47:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.228 05:47:30 -- scripts/common.sh@354 -- # echo 1 00:07:09.228 05:47:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:09.228 05:47:30 -- scripts/common.sh@365 -- # decimal 2 00:07:09.228 05:47:30 -- scripts/common.sh@352 -- # local d=2 00:07:09.228 05:47:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.228 05:47:30 -- scripts/common.sh@354 -- # echo 2 00:07:09.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.228 05:47:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:09.228 05:47:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:09.229 05:47:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:09.229 05:47:30 -- scripts/common.sh@367 -- # return 0 00:07:09.229 05:47:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.229 05:47:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:09.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.229 --rc genhtml_branch_coverage=1 00:07:09.229 --rc genhtml_function_coverage=1 00:07:09.229 --rc genhtml_legend=1 00:07:09.229 --rc geninfo_all_blocks=1 00:07:09.229 --rc geninfo_unexecuted_blocks=1 00:07:09.229 00:07:09.229 ' 00:07:09.229 05:47:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:09.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.229 --rc genhtml_branch_coverage=1 00:07:09.229 --rc genhtml_function_coverage=1 00:07:09.229 --rc genhtml_legend=1 00:07:09.229 --rc geninfo_all_blocks=1 00:07:09.229 --rc geninfo_unexecuted_blocks=1 00:07:09.229 00:07:09.229 ' 00:07:09.229 05:47:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:09.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.229 --rc genhtml_branch_coverage=1 00:07:09.229 --rc genhtml_function_coverage=1 00:07:09.229 --rc genhtml_legend=1 00:07:09.229 --rc geninfo_all_blocks=1 00:07:09.229 --rc geninfo_unexecuted_blocks=1 00:07:09.229 00:07:09.229 ' 00:07:09.229 05:47:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:09.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.229 --rc genhtml_branch_coverage=1 00:07:09.229 --rc genhtml_function_coverage=1 00:07:09.229 --rc genhtml_legend=1 00:07:09.229 --rc geninfo_all_blocks=1 00:07:09.229 --rc geninfo_unexecuted_blocks=1 00:07:09.229 00:07:09.229 ' 00:07:09.229 05:47:30 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:09.229 05:47:30 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=68982 00:07:09.229 05:47:30 -- accel/accel_rpc.sh@15 -- # waitforlisten 68982 00:07:09.229 05:47:30 -- common/autotest_common.sh@829 -- # '[' -z 68982 ']' 00:07:09.229 05:47:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.229 05:47:30 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:09.229 05:47:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.229 05:47:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
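The accel_rpc suite starting here boots spdk_tgt with --wait-for-rpc so that opcode assignments can be made before the accel framework initializes. Condensed, the rpc.py sequence the script drives against the default /var/tmp/spdk.sock socket looks like this ("incorrect" is the deliberately bogus module name from the test):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o copy -m incorrect    # intentionally bogus module
  $RPC accel_assign_opc -o copy -m software     # re-assign copy to the software module
  $RPC framework_start_init                     # finish startup; assignments take effect
  $RPC accel_get_opc_assignments | jq -r .copy  # prints "software" on success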
00:07:09.229 05:47:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.229 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:09.229 [2024-12-15 05:47:30.766387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:09.229 [2024-12-15 05:47:30.766748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68982 ] 00:07:09.488 [2024-12-15 05:47:30.904584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.488 [2024-12-15 05:47:30.939045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.488 [2024-12-15 05:47:30.939472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.488 05:47:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.488 05:47:30 -- common/autotest_common.sh@862 -- # return 0 00:07:09.488 05:47:30 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:09.488 05:47:30 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:09.488 05:47:30 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:09.488 05:47:30 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:09.488 05:47:30 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:09.488 05:47:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.488 05:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.488 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:09.488 ************************************ 00:07:09.488 START TEST accel_assign_opcode 00:07:09.488 ************************************ 00:07:09.488 05:47:30 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:09.489 05:47:30 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:09.489 05:47:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.489 05:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:09.489 [2024-12-15 05:47:31.003988] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:09.489 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.489 05:47:31 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:09.489 05:47:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.489 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:07:09.489 [2024-12-15 05:47:31.011993] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:09.489 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.489 05:47:31 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:09.489 05:47:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.489 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:07:09.748 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.748 05:47:31 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:09.748 05:47:31 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:09.748 05:47:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.748 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:07:09.748 05:47:31 -- accel/accel_rpc.sh@42 -- # grep software 00:07:09.748 05:47:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.748 software 00:07:09.748 00:07:09.748 
real 0m0.195s 00:07:09.748 user 0m0.057s 00:07:09.748 sys 0m0.012s 00:07:09.748 ************************************ 00:07:09.748 END TEST accel_assign_opcode 00:07:09.748 ************************************ 00:07:09.748 05:47:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.748 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:07:09.748 05:47:31 -- accel/accel_rpc.sh@55 -- # killprocess 68982 00:07:09.748 05:47:31 -- common/autotest_common.sh@936 -- # '[' -z 68982 ']' 00:07:09.748 05:47:31 -- common/autotest_common.sh@940 -- # kill -0 68982 00:07:09.748 05:47:31 -- common/autotest_common.sh@941 -- # uname 00:07:09.748 05:47:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.748 05:47:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68982 00:07:09.748 killing process with pid 68982 00:07:09.748 05:47:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.748 05:47:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.748 05:47:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68982' 00:07:09.748 05:47:31 -- common/autotest_common.sh@955 -- # kill 68982 00:07:09.748 05:47:31 -- common/autotest_common.sh@960 -- # wait 68982 00:07:10.007 00:07:10.007 real 0m0.960s 00:07:10.007 user 0m0.946s 00:07:10.007 sys 0m0.323s 00:07:10.007 05:47:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.007 ************************************ 00:07:10.007 END TEST accel_rpc 00:07:10.007 ************************************ 00:07:10.007 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.007 05:47:31 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:10.007 05:47:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.007 05:47:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.007 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.007 ************************************ 00:07:10.007 START TEST app_cmdline 00:07:10.007 ************************************ 00:07:10.007 05:47:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:10.007 * Looking for test storage... 
00:07:10.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:10.007 05:47:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:10.007 05:47:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:10.007 05:47:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:10.267 05:47:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:10.267 05:47:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:10.267 05:47:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:10.267 05:47:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:10.267 05:47:31 -- scripts/common.sh@335 -- # IFS=.-: 00:07:10.267 05:47:31 -- scripts/common.sh@335 -- # read -ra ver1 00:07:10.267 05:47:31 -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.267 05:47:31 -- scripts/common.sh@336 -- # read -ra ver2 00:07:10.267 05:47:31 -- scripts/common.sh@337 -- # local 'op=<' 00:07:10.267 05:47:31 -- scripts/common.sh@339 -- # ver1_l=2 00:07:10.267 05:47:31 -- scripts/common.sh@340 -- # ver2_l=1 00:07:10.267 05:47:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:10.267 05:47:31 -- scripts/common.sh@343 -- # case "$op" in 00:07:10.267 05:47:31 -- scripts/common.sh@344 -- # : 1 00:07:10.267 05:47:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:10.267 05:47:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.267 05:47:31 -- scripts/common.sh@364 -- # decimal 1 00:07:10.267 05:47:31 -- scripts/common.sh@352 -- # local d=1 00:07:10.267 05:47:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.267 05:47:31 -- scripts/common.sh@354 -- # echo 1 00:07:10.267 05:47:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:10.267 05:47:31 -- scripts/common.sh@365 -- # decimal 2 00:07:10.267 05:47:31 -- scripts/common.sh@352 -- # local d=2 00:07:10.267 05:47:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.267 05:47:31 -- scripts/common.sh@354 -- # echo 2 00:07:10.267 05:47:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:10.267 05:47:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:10.267 05:47:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:10.267 05:47:31 -- scripts/common.sh@367 -- # return 0 00:07:10.267 05:47:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.267 05:47:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.267 --rc genhtml_branch_coverage=1 00:07:10.267 --rc genhtml_function_coverage=1 00:07:10.267 --rc genhtml_legend=1 00:07:10.267 --rc geninfo_all_blocks=1 00:07:10.267 --rc geninfo_unexecuted_blocks=1 00:07:10.267 00:07:10.267 ' 00:07:10.267 05:47:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.267 --rc genhtml_branch_coverage=1 00:07:10.267 --rc genhtml_function_coverage=1 00:07:10.267 --rc genhtml_legend=1 00:07:10.267 --rc geninfo_all_blocks=1 00:07:10.267 --rc geninfo_unexecuted_blocks=1 00:07:10.267 00:07:10.267 ' 00:07:10.267 05:47:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.267 --rc genhtml_branch_coverage=1 00:07:10.267 --rc genhtml_function_coverage=1 00:07:10.267 --rc genhtml_legend=1 00:07:10.267 --rc geninfo_all_blocks=1 00:07:10.267 --rc geninfo_unexecuted_blocks=1 00:07:10.267 00:07:10.267 ' 00:07:10.267 05:47:31 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.267 --rc genhtml_branch_coverage=1 00:07:10.267 --rc genhtml_function_coverage=1 00:07:10.267 --rc genhtml_legend=1 00:07:10.267 --rc geninfo_all_blocks=1 00:07:10.267 --rc geninfo_unexecuted_blocks=1 00:07:10.267 00:07:10.267 ' 00:07:10.267 05:47:31 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:10.267 05:47:31 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:10.267 05:47:31 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69069 00:07:10.267 05:47:31 -- app/cmdline.sh@18 -- # waitforlisten 69069 00:07:10.267 05:47:31 -- common/autotest_common.sh@829 -- # '[' -z 69069 ']' 00:07:10.267 05:47:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.267 05:47:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.267 05:47:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.267 05:47:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.267 05:47:31 -- common/autotest_common.sh@10 -- # set +x 00:07:10.267 [2024-12-15 05:47:31.748132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:10.267 [2024-12-15 05:47:31.748210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69069 ] 00:07:10.267 [2024-12-15 05:47:31.873127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.267 [2024-12-15 05:47:31.905488] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.526 [2024-12-15 05:47:31.905966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.095 05:47:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.095 05:47:32 -- common/autotest_common.sh@862 -- # return 0 00:07:11.095 05:47:32 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:11.367 { 00:07:11.367 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:11.367 "fields": { 00:07:11.367 "major": 24, 00:07:11.367 "minor": 1, 00:07:11.367 "patch": 1, 00:07:11.367 "suffix": "-pre", 00:07:11.367 "commit": "c13c99a5e" 00:07:11.367 } 00:07:11.367 } 00:07:11.367 05:47:32 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:11.367 05:47:32 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:11.367 05:47:32 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:11.367 05:47:32 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:11.367 05:47:32 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:11.367 05:47:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.367 05:47:32 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:11.367 05:47:32 -- common/autotest_common.sh@10 -- # set +x 00:07:11.367 05:47:32 -- app/cmdline.sh@26 -- # sort 00:07:11.367 05:47:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.367 05:47:32 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:11.367 05:47:32 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:11.367 05:47:32 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.367 05:47:32 -- common/autotest_common.sh@650 -- # local es=0 00:07:11.367 05:47:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.367 05:47:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.367 05:47:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.367 05:47:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.367 05:47:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.367 05:47:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.367 05:47:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.367 05:47:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.367 05:47:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:11.367 05:47:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.641 request: 00:07:11.641 { 00:07:11.641 "method": "env_dpdk_get_mem_stats", 00:07:11.641 "req_id": 1 00:07:11.641 } 00:07:11.641 Got JSON-RPC error response 00:07:11.641 response: 00:07:11.641 { 00:07:11.641 "code": -32601, 00:07:11.641 "message": "Method not found" 00:07:11.641 } 00:07:11.641 05:47:33 -- common/autotest_common.sh@653 -- # es=1 00:07:11.641 05:47:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.641 05:47:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.641 05:47:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.641 05:47:33 -- app/cmdline.sh@1 -- # killprocess 69069 00:07:11.641 05:47:33 -- common/autotest_common.sh@936 -- # '[' -z 69069 ']' 00:07:11.641 05:47:33 -- common/autotest_common.sh@940 -- # kill -0 69069 00:07:11.641 05:47:33 -- common/autotest_common.sh@941 -- # uname 00:07:11.641 05:47:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.641 05:47:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69069 00:07:11.900 killing process with pid 69069 00:07:11.900 05:47:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.900 05:47:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.900 05:47:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69069' 00:07:11.900 05:47:33 -- common/autotest_common.sh@955 -- # kill 69069 00:07:11.900 05:47:33 -- common/autotest_common.sh@960 -- # wait 69069 00:07:11.900 ************************************ 00:07:11.900 END TEST app_cmdline 00:07:11.900 ************************************ 00:07:11.900 00:07:11.900 real 0m1.966s 00:07:11.900 user 0m2.529s 00:07:11.900 sys 0m0.376s 00:07:11.900 05:47:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.900 05:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.160 05:47:33 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:12.160 05:47:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.160 05:47:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.160 05:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.160 
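The cmdline suite that just finished launches spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are reachable; everything else is rejected at the RPC layer, which is exactly what the -32601 "Method not found" response above exercises. A condensed reproduction of the three calls the script makes:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC spdk_get_version          # allowed: returns the version object shown above
  $RPC rpc_get_methods           # allowed: lists exactly the two permitted methods
  $RPC env_dpdk_get_mem_stats    # filtered: fails with -32601 "Method not found"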
************************************ 00:07:12.160 START TEST version 00:07:12.160 ************************************ 00:07:12.160 05:47:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:12.160 * Looking for test storage... 00:07:12.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:12.160 05:47:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:12.160 05:47:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:12.160 05:47:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:12.160 05:47:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:12.160 05:47:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:12.160 05:47:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:12.160 05:47:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:12.160 05:47:33 -- scripts/common.sh@335 -- # IFS=.-: 00:07:12.160 05:47:33 -- scripts/common.sh@335 -- # read -ra ver1 00:07:12.160 05:47:33 -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.160 05:47:33 -- scripts/common.sh@336 -- # read -ra ver2 00:07:12.160 05:47:33 -- scripts/common.sh@337 -- # local 'op=<' 00:07:12.160 05:47:33 -- scripts/common.sh@339 -- # ver1_l=2 00:07:12.160 05:47:33 -- scripts/common.sh@340 -- # ver2_l=1 00:07:12.160 05:47:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:12.160 05:47:33 -- scripts/common.sh@343 -- # case "$op" in 00:07:12.160 05:47:33 -- scripts/common.sh@344 -- # : 1 00:07:12.160 05:47:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:12.160 05:47:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.160 05:47:33 -- scripts/common.sh@364 -- # decimal 1 00:07:12.160 05:47:33 -- scripts/common.sh@352 -- # local d=1 00:07:12.160 05:47:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.160 05:47:33 -- scripts/common.sh@354 -- # echo 1 00:07:12.160 05:47:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:12.160 05:47:33 -- scripts/common.sh@365 -- # decimal 2 00:07:12.160 05:47:33 -- scripts/common.sh@352 -- # local d=2 00:07:12.160 05:47:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.160 05:47:33 -- scripts/common.sh@354 -- # echo 2 00:07:12.160 05:47:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:12.160 05:47:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:12.160 05:47:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:12.160 05:47:33 -- scripts/common.sh@367 -- # return 0 00:07:12.160 05:47:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.160 05:47:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:12.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.160 --rc genhtml_branch_coverage=1 00:07:12.160 --rc genhtml_function_coverage=1 00:07:12.160 --rc genhtml_legend=1 00:07:12.160 --rc geninfo_all_blocks=1 00:07:12.160 --rc geninfo_unexecuted_blocks=1 00:07:12.160 00:07:12.160 ' 00:07:12.160 05:47:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:12.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.160 --rc genhtml_branch_coverage=1 00:07:12.160 --rc genhtml_function_coverage=1 00:07:12.160 --rc genhtml_legend=1 00:07:12.160 --rc geninfo_all_blocks=1 00:07:12.160 --rc geninfo_unexecuted_blocks=1 00:07:12.160 00:07:12.160 ' 00:07:12.160 05:47:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:12.160 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:12.160 --rc genhtml_branch_coverage=1 00:07:12.160 --rc genhtml_function_coverage=1 00:07:12.160 --rc genhtml_legend=1 00:07:12.160 --rc geninfo_all_blocks=1 00:07:12.160 --rc geninfo_unexecuted_blocks=1 00:07:12.160 00:07:12.160 ' 00:07:12.160 05:47:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:12.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.160 --rc genhtml_branch_coverage=1 00:07:12.160 --rc genhtml_function_coverage=1 00:07:12.160 --rc genhtml_legend=1 00:07:12.160 --rc geninfo_all_blocks=1 00:07:12.160 --rc geninfo_unexecuted_blocks=1 00:07:12.160 00:07:12.160 ' 00:07:12.160 05:47:33 -- app/version.sh@17 -- # get_header_version major 00:07:12.160 05:47:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.160 05:47:33 -- app/version.sh@14 -- # tr -d '"' 00:07:12.160 05:47:33 -- app/version.sh@14 -- # cut -f2 00:07:12.160 05:47:33 -- app/version.sh@17 -- # major=24 00:07:12.160 05:47:33 -- app/version.sh@18 -- # get_header_version minor 00:07:12.160 05:47:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.160 05:47:33 -- app/version.sh@14 -- # cut -f2 00:07:12.160 05:47:33 -- app/version.sh@14 -- # tr -d '"' 00:07:12.160 05:47:33 -- app/version.sh@18 -- # minor=1 00:07:12.160 05:47:33 -- app/version.sh@19 -- # get_header_version patch 00:07:12.160 05:47:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.160 05:47:33 -- app/version.sh@14 -- # cut -f2 00:07:12.160 05:47:33 -- app/version.sh@14 -- # tr -d '"' 00:07:12.160 05:47:33 -- app/version.sh@19 -- # patch=1 00:07:12.160 05:47:33 -- app/version.sh@20 -- # get_header_version suffix 00:07:12.160 05:47:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.160 05:47:33 -- app/version.sh@14 -- # tr -d '"' 00:07:12.160 05:47:33 -- app/version.sh@14 -- # cut -f2 00:07:12.160 05:47:33 -- app/version.sh@20 -- # suffix=-pre 00:07:12.160 05:47:33 -- app/version.sh@22 -- # version=24.1 00:07:12.160 05:47:33 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:12.160 05:47:33 -- app/version.sh@25 -- # version=24.1.1 00:07:12.160 05:47:33 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:12.160 05:47:33 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:12.160 05:47:33 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:12.419 05:47:33 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:12.420 05:47:33 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:12.420 ************************************ 00:07:12.420 END TEST version 00:07:12.420 ************************************ 00:07:12.420 00:07:12.420 real 0m0.249s 00:07:12.420 user 0m0.168s 00:07:12.420 sys 0m0.117s 00:07:12.420 05:47:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.420 05:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.420 05:47:33 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:12.420 05:47:33 -- spdk/autotest.sh@191 -- # uname -s 00:07:12.420 05:47:33 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
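version.sh, which just completed, cross-checks the C header against the Python package: it pulls SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h, maps the -pre suffix to rc0, and compares the result (24.1.1rc0 in this run) with spdk.__version__. A condensed sketch of the same extraction, with paths as in the trace:

  H=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$H" | cut -f2 | tr -d '"')
  echo "header: $major.$minor.$patch"                  # 24.1.1 in this run
  # Needs PYTHONPATH pointing at spdk_repo/spdk/python, as the trace sets it.
  python3 -c 'import spdk; print(spdk.__version__)'    # 24.1.1rc0 here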
00:07:12.420 05:47:33 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:12.420 05:47:33 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:07:12.420 05:47:33 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:07:12.420 05:47:33 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:12.420 05:47:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.420 05:47:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.420 05:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:12.420 ************************************ 00:07:12.420 START TEST spdk_dd 00:07:12.420 ************************************ 00:07:12.420 05:47:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:12.420 * Looking for test storage... 00:07:12.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:12.420 05:47:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:12.420 05:47:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:12.420 05:47:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:12.420 05:47:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:12.420 05:47:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:12.420 05:47:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:12.420 05:47:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:12.420 05:47:34 -- scripts/common.sh@335 -- # IFS=.-: 00:07:12.420 05:47:34 -- scripts/common.sh@335 -- # read -ra ver1 00:07:12.420 05:47:34 -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.420 05:47:34 -- scripts/common.sh@336 -- # read -ra ver2 00:07:12.420 05:47:34 -- scripts/common.sh@337 -- # local 'op=<' 00:07:12.420 05:47:34 -- scripts/common.sh@339 -- # ver1_l=2 00:07:12.420 05:47:34 -- scripts/common.sh@340 -- # ver2_l=1 00:07:12.420 05:47:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:12.420 05:47:34 -- scripts/common.sh@343 -- # case "$op" in 00:07:12.420 05:47:34 -- scripts/common.sh@344 -- # : 1 00:07:12.420 05:47:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:12.420 05:47:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.420 05:47:34 -- scripts/common.sh@364 -- # decimal 1 00:07:12.420 05:47:34 -- scripts/common.sh@352 -- # local d=1 00:07:12.420 05:47:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.420 05:47:34 -- scripts/common.sh@354 -- # echo 1 00:07:12.420 05:47:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:12.420 05:47:34 -- scripts/common.sh@365 -- # decimal 2 00:07:12.420 05:47:34 -- scripts/common.sh@352 -- # local d=2 00:07:12.420 05:47:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.420 05:47:34 -- scripts/common.sh@354 -- # echo 2 00:07:12.420 05:47:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:12.420 05:47:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:12.420 05:47:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:12.420 05:47:34 -- scripts/common.sh@367 -- # return 0 00:07:12.420 05:47:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.420 05:47:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:12.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.420 --rc genhtml_branch_coverage=1 00:07:12.420 --rc genhtml_function_coverage=1 00:07:12.420 --rc genhtml_legend=1 00:07:12.420 --rc geninfo_all_blocks=1 00:07:12.420 --rc geninfo_unexecuted_blocks=1 00:07:12.420 00:07:12.420 ' 00:07:12.420 05:47:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:12.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.420 --rc genhtml_branch_coverage=1 00:07:12.420 --rc genhtml_function_coverage=1 00:07:12.420 --rc genhtml_legend=1 00:07:12.420 --rc geninfo_all_blocks=1 00:07:12.420 --rc geninfo_unexecuted_blocks=1 00:07:12.420 00:07:12.420 ' 00:07:12.420 05:47:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:12.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.420 --rc genhtml_branch_coverage=1 00:07:12.420 --rc genhtml_function_coverage=1 00:07:12.420 --rc genhtml_legend=1 00:07:12.420 --rc geninfo_all_blocks=1 00:07:12.420 --rc geninfo_unexecuted_blocks=1 00:07:12.420 00:07:12.420 ' 00:07:12.420 05:47:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:12.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.420 --rc genhtml_branch_coverage=1 00:07:12.420 --rc genhtml_function_coverage=1 00:07:12.420 --rc genhtml_legend=1 00:07:12.420 --rc geninfo_all_blocks=1 00:07:12.420 --rc geninfo_unexecuted_blocks=1 00:07:12.420 00:07:12.420 ' 00:07:12.420 05:47:34 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.420 05:47:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.420 05:47:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.420 05:47:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.420 05:47:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.420 05:47:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.420 05:47:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.420 05:47:34 -- paths/export.sh@5 -- # export PATH 00:07:12.420 05:47:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.420 05:47:34 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.990 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.990 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.990 05:47:34 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:12.990 05:47:34 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:12.990 05:47:34 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:12.990 05:47:34 -- scripts/common.sh@312 -- # local nvmes 00:07:12.990 05:47:34 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:12.990 05:47:34 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:12.990 05:47:34 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:12.990 05:47:34 -- scripts/common.sh@297 -- # local bdf= 00:07:12.990 05:47:34 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:07:12.990 05:47:34 -- scripts/common.sh@232 -- # local class 00:07:12.990 05:47:34 -- scripts/common.sh@233 -- # local subclass 00:07:12.990 05:47:34 -- scripts/common.sh@234 -- # local progif 00:07:12.990 05:47:34 -- scripts/common.sh@235 -- # printf %02x 1 00:07:12.990 05:47:34 -- scripts/common.sh@235 -- # class=01 00:07:12.990 05:47:34 -- scripts/common.sh@236 -- # printf %02x 8 00:07:12.990 05:47:34 -- scripts/common.sh@236 -- # subclass=08 00:07:12.990 05:47:34 -- scripts/common.sh@237 -- # printf %02x 2 00:07:12.990 05:47:34 -- scripts/common.sh@237 -- # progif=02 00:07:12.990 05:47:34 -- scripts/common.sh@239 -- # hash lspci 00:07:12.990 05:47:34 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:12.990 05:47:34 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:12.990 05:47:34 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:12.990 05:47:34 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:12.990 05:47:34 -- scripts/common.sh@244 -- # tr -d '"' 00:07:12.990 05:47:34 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:12.990 05:47:34 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:07:12.990 05:47:34 -- scripts/common.sh@15 -- # local i 00:07:12.990 05:47:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:12.990 05:47:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:12.990 05:47:34 -- scripts/common.sh@24 -- # return 0 00:07:12.990 05:47:34 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:12.990 05:47:34 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:12.990 05:47:34 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:12.990 05:47:34 -- scripts/common.sh@15 -- # local i 00:07:12.990 05:47:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:12.990 05:47:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:12.990 05:47:34 -- scripts/common.sh@24 -- # return 0 00:07:12.990 05:47:34 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:12.990 05:47:34 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:12.990 05:47:34 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:12.990 05:47:34 -- scripts/common.sh@322 -- # uname -s 00:07:12.990 05:47:34 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:12.990 05:47:34 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:12.990 05:47:34 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:12.990 05:47:34 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:12.990 05:47:34 -- scripts/common.sh@322 -- # uname -s 00:07:12.990 05:47:34 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:12.990 05:47:34 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:12.990 05:47:34 -- scripts/common.sh@327 -- # (( 2 )) 00:07:12.990 05:47:34 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:12.990 05:47:34 -- dd/dd.sh@13 -- # check_liburing 00:07:12.990 05:47:34 -- dd/common.sh@139 -- # local lib so 00:07:12.990 05:47:34 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:12.990 05:47:34 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.990 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:12.990 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:12.991 
05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.991 05:47:34 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:12.991 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == 
liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:12.992 05:47:34 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:12.992 05:47:34 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:12.992 * spdk_dd linked to liburing 00:07:12.992 05:47:34 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:12.992 05:47:34 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:12.992 05:47:34 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:12.992 05:47:34 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:12.992 05:47:34 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:12.992 05:47:34 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:12.992 05:47:34 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:12.992 05:47:34 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:12.992 05:47:34 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:12.992 05:47:34 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:12.992 05:47:34 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:12.992 05:47:34 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:12.992 05:47:34 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:12.992 05:47:34 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:12.992 05:47:34 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:12.992 05:47:34 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:12.992 05:47:34 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:12.992 05:47:34 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:12.992 05:47:34 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:12.992 05:47:34 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:12.992 05:47:34 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:12.992 05:47:34 -- common/build_config.sh@20 -- # 
CONFIG_LTO=n 00:07:12.992 05:47:34 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:12.992 05:47:34 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:12.992 05:47:34 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:12.992 05:47:34 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:12.992 05:47:34 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:12.992 05:47:34 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:12.992 05:47:34 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:12.992 05:47:34 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:12.992 05:47:34 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:12.992 05:47:34 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:12.992 05:47:34 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:12.992 05:47:34 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:12.992 05:47:34 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:12.992 05:47:34 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:12.992 05:47:34 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:12.992 05:47:34 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:12.992 05:47:34 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:12.992 05:47:34 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:12.992 05:47:34 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:12.992 05:47:34 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:12.992 05:47:34 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:12.992 05:47:34 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:12.992 05:47:34 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:12.992 05:47:34 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:12.992 05:47:34 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:12.992 05:47:34 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:12.992 05:47:34 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:12.992 05:47:34 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:12.992 05:47:34 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:12.992 05:47:34 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:12.992 05:47:34 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:12.992 05:47:34 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:12.992 05:47:34 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:12.992 05:47:34 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:12.992 05:47:34 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:12.993 05:47:34 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:12.993 05:47:34 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:12.993 05:47:34 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:12.993 05:47:34 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:12.993 05:47:34 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:12.993 05:47:34 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:12.993 05:47:34 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:12.993 05:47:34 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:12.993 05:47:34 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:12.993 05:47:34 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:12.993 05:47:34 -- common/build_config.sh@66 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:07:12.993 05:47:34 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:12.993 05:47:34 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:12.993 05:47:34 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:12.993 05:47:34 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:12.993 05:47:34 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:12.993 05:47:34 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:12.993 05:47:34 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:12.993 05:47:34 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:12.993 05:47:34 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:12.993 05:47:34 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:12.993 05:47:34 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:12.993 05:47:34 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:12.993 05:47:34 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:12.993 05:47:34 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:12.993 05:47:34 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:12.993 05:47:34 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:12.993 05:47:34 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:12.993 05:47:34 -- dd/common.sh@157 -- # return 0 00:07:12.993 05:47:34 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:12.993 05:47:34 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:12.993 05:47:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:12.993 05:47:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.993 05:47:34 -- common/autotest_common.sh@10 -- # set +x 00:07:12.993 ************************************ 00:07:12.993 START TEST spdk_dd_basic_rw 00:07:12.993 ************************************ 00:07:12.993 05:47:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:12.993 * Looking for test storage... 00:07:12.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:12.993 05:47:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:12.993 05:47:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:12.993 05:47:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:13.252 05:47:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:13.252 05:47:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:13.252 05:47:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:13.252 05:47:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:13.253 05:47:34 -- scripts/common.sh@335 -- # IFS=.-: 00:07:13.253 05:47:34 -- scripts/common.sh@335 -- # read -ra ver1 00:07:13.253 05:47:34 -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.253 05:47:34 -- scripts/common.sh@336 -- # read -ra ver2 00:07:13.253 05:47:34 -- scripts/common.sh@337 -- # local 'op=<' 00:07:13.253 05:47:34 -- scripts/common.sh@339 -- # ver1_l=2 00:07:13.253 05:47:34 -- scripts/common.sh@340 -- # ver2_l=1 00:07:13.253 05:47:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:13.253 05:47:34 -- scripts/common.sh@343 -- # case "$op" in 00:07:13.253 05:47:34 -- scripts/common.sh@344 -- # : 1 00:07:13.253 05:47:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:13.253 05:47:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.253 05:47:34 -- scripts/common.sh@364 -- # decimal 1 00:07:13.253 05:47:34 -- scripts/common.sh@352 -- # local d=1 00:07:13.253 05:47:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.253 05:47:34 -- scripts/common.sh@354 -- # echo 1 00:07:13.253 05:47:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:13.253 05:47:34 -- scripts/common.sh@365 -- # decimal 2 00:07:13.253 05:47:34 -- scripts/common.sh@352 -- # local d=2 00:07:13.253 05:47:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.253 05:47:34 -- scripts/common.sh@354 -- # echo 2 00:07:13.253 05:47:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:13.253 05:47:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:13.253 05:47:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:13.253 05:47:34 -- scripts/common.sh@367 -- # return 0 00:07:13.253 05:47:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.253 05:47:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:13.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.253 --rc genhtml_branch_coverage=1 00:07:13.253 --rc genhtml_function_coverage=1 00:07:13.253 --rc genhtml_legend=1 00:07:13.253 --rc geninfo_all_blocks=1 00:07:13.253 --rc geninfo_unexecuted_blocks=1 00:07:13.253 00:07:13.253 ' 00:07:13.253 05:47:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:13.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.253 --rc genhtml_branch_coverage=1 00:07:13.253 --rc genhtml_function_coverage=1 00:07:13.253 --rc genhtml_legend=1 00:07:13.253 --rc geninfo_all_blocks=1 00:07:13.253 --rc geninfo_unexecuted_blocks=1 00:07:13.253 00:07:13.253 ' 00:07:13.253 05:47:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:13.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.253 --rc genhtml_branch_coverage=1 00:07:13.253 --rc genhtml_function_coverage=1 00:07:13.253 --rc genhtml_legend=1 00:07:13.253 --rc geninfo_all_blocks=1 00:07:13.253 --rc geninfo_unexecuted_blocks=1 00:07:13.253 00:07:13.253 ' 00:07:13.253 05:47:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:13.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.253 --rc genhtml_branch_coverage=1 00:07:13.253 --rc genhtml_function_coverage=1 00:07:13.253 --rc genhtml_legend=1 00:07:13.253 --rc geninfo_all_blocks=1 00:07:13.253 --rc geninfo_unexecuted_blocks=1 00:07:13.253 00:07:13.253 ' 00:07:13.253 05:47:34 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.253 05:47:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.253 05:47:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.253 05:47:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.253 05:47:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.253 05:47:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.253 05:47:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.253 05:47:34 -- paths/export.sh@5 -- # export PATH 00:07:13.253 05:47:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.253 05:47:34 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:13.253 05:47:34 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:13.253 05:47:34 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:13.253 05:47:34 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:13.253 05:47:34 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:13.253 05:47:34 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:13.253 05:47:34 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:13.253 05:47:34 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:13.253 05:47:34 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.253 05:47:34 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:13.253 05:47:34 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:13.253 05:47:34 -- dd/common.sh@126 -- # mapfile -t id 00:07:13.253 05:47:34 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:13.515 05:47:34 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe 
Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 
Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2188 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA 
Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:13.515 05:47:34 -- dd/common.sh@130 -- # lbaf=04 00:07:13.515 05:47:34 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple 
Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 
Data Units Written: 9 Host Read Commands: 2188 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:13.515 05:47:34 -- dd/common.sh@132 -- # lbaf=4096 00:07:13.515 05:47:34 -- dd/common.sh@134 -- # echo 4096 00:07:13.515 05:47:34 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:13.515 05:47:34 -- dd/basic_rw.sh@96 -- # : 00:07:13.515 05:47:34 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:13.515 05:47:34 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:13.515 05:47:34 -- dd/common.sh@31 -- # xtrace_disable 00:07:13.515 05:47:34 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:13.515 05:47:34 -- common/autotest_common.sh@10 -- # set +x 00:07:13.515 05:47:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.515 05:47:34 -- common/autotest_common.sh@10 -- # set +x 00:07:13.515 ************************************ 00:07:13.515 START TEST dd_bs_lt_native_bs 00:07:13.515 ************************************ 00:07:13.515 05:47:34 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:13.515 05:47:34 -- common/autotest_common.sh@650 -- # local es=0 00:07:13.515 05:47:34 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:13.515 05:47:34 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.515 05:47:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.515 05:47:34 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.515 05:47:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.515 05:47:34 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.515 05:47:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.515 05:47:34 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.515 05:47:34 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.515 05:47:34 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:13.515 { 00:07:13.515 "subsystems": [ 00:07:13.515 { 00:07:13.515 "subsystem": "bdev", 00:07:13.515 "config": [ 00:07:13.515 { 00:07:13.515 "params": { 00:07:13.515 "trtype": "pcie", 00:07:13.515 "traddr": "0000:00:06.0", 00:07:13.515 "name": "Nvme0" 00:07:13.515 }, 00:07:13.515 "method": "bdev_nvme_attach_controller" 00:07:13.515 }, 00:07:13.515 { 00:07:13.515 "method": "bdev_wait_for_examine" 00:07:13.515 } 00:07:13.515 ] 00:07:13.515 } 00:07:13.515 ] 00:07:13.515 } 00:07:13.515 [2024-12-15 05:47:34.979526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.516 [2024-12-15 05:47:34.980028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69413 ] 00:07:13.516 [2024-12-15 05:47:35.119630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.774 [2024-12-15 05:47:35.160429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.774 [2024-12-15 05:47:35.279493] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:13.774 [2024-12-15 05:47:35.279583] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.774 [2024-12-15 05:47:35.350977] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:14.033 05:47:35 -- common/autotest_common.sh@653 -- # es=234 00:07:14.033 05:47:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.033 05:47:35 -- common/autotest_common.sh@662 -- # es=106 00:07:14.033 ************************************ 00:07:14.033 END TEST dd_bs_lt_native_bs 00:07:14.033 ************************************ 00:07:14.033 05:47:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:14.033 05:47:35 -- common/autotest_common.sh@670 -- # es=1 00:07:14.033 05:47:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.033 00:07:14.033 real 0m0.494s 00:07:14.033 user 0m0.323s 00:07:14.033 sys 0m0.126s 00:07:14.033 05:47:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.033 05:47:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.033 05:47:35 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:14.033 05:47:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:14.033 05:47:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.033 05:47:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.033 ************************************ 00:07:14.033 START TEST dd_rw 00:07:14.033 ************************************ 00:07:14.033 05:47:35 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:07:14.033 05:47:35 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:14.033 05:47:35 -- dd/basic_rw.sh@12 -- # local count size 00:07:14.033 05:47:35 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:14.033 05:47:35 -- dd/basic_rw.sh@15 -- # qds=(1 64) 
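The trace above performs two setup steps before the read/write matrix starts: get_native_nvme_bs parses the spdk_nvme_identify output to find the data size of the LBA format currently in use (format #04, 4096 bytes), and dd_bs_lt_native_bs then verifies that spdk_dd rejects a --bs smaller than that native size. The following is a condensed sketch of that logic, not the verbatim test code; the conf.json path and the variable names are illustrative stand-ins for the JSON config the test actually passes via /dev/fd.

# Sketch: derive the native block size for the controller at 0000:00:06.0.
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
id_out=$("$identify" -r 'trtype:pcie traddr:0000:00:06.0')
re_current='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id_out =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}          # "04" in the output above
re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id_out =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}        # 4096 in the output above

# Sketch: the negative test - spdk_dd must exit non-zero when --bs is below native_bs.
if "$spdk_dd" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json conf.json; then
    echo "FAIL: --bs=2048 accepted despite native block size $native_bs" >&2
    exit 1
fi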
00:07:14.033 05:47:35 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:14.033 05:47:35 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:14.033 05:47:35 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:14.033 05:47:35 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:14.033 05:47:35 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:14.033 05:47:35 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:14.033 05:47:35 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:14.033 05:47:35 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:14.033 05:47:35 -- dd/basic_rw.sh@23 -- # count=15 00:07:14.033 05:47:35 -- dd/basic_rw.sh@24 -- # count=15 00:07:14.033 05:47:35 -- dd/basic_rw.sh@25 -- # size=61440 00:07:14.033 05:47:35 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:14.033 05:47:35 -- dd/common.sh@98 -- # xtrace_disable 00:07:14.033 05:47:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.600 05:47:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:14.600 05:47:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:14.600 05:47:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:14.600 05:47:36 -- common/autotest_common.sh@10 -- # set +x 00:07:14.600 [2024-12-15 05:47:36.054104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:14.600 [2024-12-15 05:47:36.054374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69448 ] 00:07:14.600 { 00:07:14.600 "subsystems": [ 00:07:14.600 { 00:07:14.600 "subsystem": "bdev", 00:07:14.600 "config": [ 00:07:14.600 { 00:07:14.600 "params": { 00:07:14.600 "trtype": "pcie", 00:07:14.600 "traddr": "0000:00:06.0", 00:07:14.600 "name": "Nvme0" 00:07:14.600 }, 00:07:14.600 "method": "bdev_nvme_attach_controller" 00:07:14.600 }, 00:07:14.600 { 00:07:14.600 "method": "bdev_wait_for_examine" 00:07:14.600 } 00:07:14.600 ] 00:07:14.600 } 00:07:14.600 ] 00:07:14.600 } 00:07:14.600 [2024-12-15 05:47:36.191954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.600 [2024-12-15 05:47:36.225638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.859  [2024-12-15T05:47:36.500Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:14.859 00:07:14.859 05:47:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:14.859 05:47:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:14.859 05:47:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:14.859 05:47:36 -- common/autotest_common.sh@10 -- # set +x 00:07:15.118 [2024-12-15 05:47:36.529934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:15.118 [2024-12-15 05:47:36.530683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69457 ] 00:07:15.118 { 00:07:15.118 "subsystems": [ 00:07:15.118 { 00:07:15.118 "subsystem": "bdev", 00:07:15.118 "config": [ 00:07:15.118 { 00:07:15.118 "params": { 00:07:15.118 "trtype": "pcie", 00:07:15.118 "traddr": "0000:00:06.0", 00:07:15.118 "name": "Nvme0" 00:07:15.118 }, 00:07:15.118 "method": "bdev_nvme_attach_controller" 00:07:15.118 }, 00:07:15.118 { 00:07:15.118 "method": "bdev_wait_for_examine" 00:07:15.118 } 00:07:15.118 ] 00:07:15.118 } 00:07:15.118 ] 00:07:15.118 } 00:07:15.118 [2024-12-15 05:47:36.667622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.118 [2024-12-15 05:47:36.698315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.377  [2024-12-15T05:47:37.018Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:15.377 00:07:15.377 05:47:36 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.377 05:47:36 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:15.377 05:47:36 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:15.377 05:47:36 -- dd/common.sh@11 -- # local nvme_ref= 00:07:15.377 05:47:36 -- dd/common.sh@12 -- # local size=61440 00:07:15.377 05:47:36 -- dd/common.sh@14 -- # local bs=1048576 00:07:15.377 05:47:36 -- dd/common.sh@15 -- # local count=1 00:07:15.377 05:47:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:15.377 05:47:36 -- dd/common.sh@18 -- # gen_conf 00:07:15.377 05:47:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:15.377 05:47:36 -- common/autotest_common.sh@10 -- # set +x 00:07:15.377 [2024-12-15 05:47:37.010953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:15.377 [2024-12-15 05:47:37.011040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69470 ] 00:07:15.636 { 00:07:15.636 "subsystems": [ 00:07:15.636 { 00:07:15.636 "subsystem": "bdev", 00:07:15.636 "config": [ 00:07:15.636 { 00:07:15.636 "params": { 00:07:15.636 "trtype": "pcie", 00:07:15.636 "traddr": "0000:00:06.0", 00:07:15.636 "name": "Nvme0" 00:07:15.636 }, 00:07:15.636 "method": "bdev_nvme_attach_controller" 00:07:15.636 }, 00:07:15.636 { 00:07:15.636 "method": "bdev_wait_for_examine" 00:07:15.636 } 00:07:15.636 ] 00:07:15.636 } 00:07:15.636 ] 00:07:15.636 } 00:07:15.636 [2024-12-15 05:47:37.146212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.636 [2024-12-15 05:47:37.177039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.895  [2024-12-15T05:47:37.536Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:15.895 00:07:15.895 05:47:37 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:15.895 05:47:37 -- dd/basic_rw.sh@23 -- # count=15 00:07:15.895 05:47:37 -- dd/basic_rw.sh@24 -- # count=15 00:07:15.895 05:47:37 -- dd/basic_rw.sh@25 -- # size=61440 00:07:15.895 05:47:37 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:15.895 05:47:37 -- dd/common.sh@98 -- # xtrace_disable 00:07:15.895 05:47:37 -- common/autotest_common.sh@10 -- # set +x 00:07:16.463 05:47:37 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:16.463 05:47:37 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:16.463 05:47:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:16.463 05:47:37 -- common/autotest_common.sh@10 -- # set +x 00:07:16.463 [2024-12-15 05:47:37.965505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:16.463 [2024-12-15 05:47:37.965798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69488 ] 00:07:16.463 { 00:07:16.463 "subsystems": [ 00:07:16.463 { 00:07:16.463 "subsystem": "bdev", 00:07:16.463 "config": [ 00:07:16.463 { 00:07:16.463 "params": { 00:07:16.463 "trtype": "pcie", 00:07:16.463 "traddr": "0000:00:06.0", 00:07:16.463 "name": "Nvme0" 00:07:16.463 }, 00:07:16.463 "method": "bdev_nvme_attach_controller" 00:07:16.463 }, 00:07:16.463 { 00:07:16.463 "method": "bdev_wait_for_examine" 00:07:16.463 } 00:07:16.463 ] 00:07:16.463 } 00:07:16.463 ] 00:07:16.463 } 00:07:16.722 [2024-12-15 05:47:38.102996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.722 [2024-12-15 05:47:38.134518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.722  [2024-12-15T05:47:38.623Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:16.982 00:07:16.982 05:47:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:16.982 05:47:38 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:16.982 05:47:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:16.982 05:47:38 -- common/autotest_common.sh@10 -- # set +x 00:07:16.982 [2024-12-15 05:47:38.440423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.982 [2024-12-15 05:47:38.440516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69501 ] 00:07:16.982 { 00:07:16.982 "subsystems": [ 00:07:16.982 { 00:07:16.982 "subsystem": "bdev", 00:07:16.982 "config": [ 00:07:16.982 { 00:07:16.982 "params": { 00:07:16.982 "trtype": "pcie", 00:07:16.982 "traddr": "0000:00:06.0", 00:07:16.982 "name": "Nvme0" 00:07:16.982 }, 00:07:16.982 "method": "bdev_nvme_attach_controller" 00:07:16.982 }, 00:07:16.982 { 00:07:16.982 "method": "bdev_wait_for_examine" 00:07:16.982 } 00:07:16.982 ] 00:07:16.982 } 00:07:16.982 ] 00:07:16.982 } 00:07:16.982 [2024-12-15 05:47:38.575161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.982 [2024-12-15 05:47:38.607380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.241  [2024-12-15T05:47:38.882Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:17.241 00:07:17.241 05:47:38 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.241 05:47:38 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:17.241 05:47:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:17.241 05:47:38 -- dd/common.sh@11 -- # local nvme_ref= 00:07:17.241 05:47:38 -- dd/common.sh@12 -- # local size=61440 00:07:17.241 05:47:38 -- dd/common.sh@14 -- # local bs=1048576 00:07:17.241 05:47:38 -- dd/common.sh@15 -- # local count=1 00:07:17.241 05:47:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:17.241 05:47:38 -- dd/common.sh@18 -- # gen_conf 00:07:17.241 05:47:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:17.241 05:47:38 -- common/autotest_common.sh@10 -- # set +x 00:07:17.500 [2024-12-15 
05:47:38.918174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:17.500 [2024-12-15 05:47:38.918485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69514 ] 00:07:17.500 { 00:07:17.500 "subsystems": [ 00:07:17.500 { 00:07:17.500 "subsystem": "bdev", 00:07:17.500 "config": [ 00:07:17.500 { 00:07:17.500 "params": { 00:07:17.500 "trtype": "pcie", 00:07:17.500 "traddr": "0000:00:06.0", 00:07:17.500 "name": "Nvme0" 00:07:17.500 }, 00:07:17.500 "method": "bdev_nvme_attach_controller" 00:07:17.500 }, 00:07:17.500 { 00:07:17.500 "method": "bdev_wait_for_examine" 00:07:17.500 } 00:07:17.500 ] 00:07:17.500 } 00:07:17.500 ] 00:07:17.500 } 00:07:17.500 [2024-12-15 05:47:39.055245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.500 [2024-12-15 05:47:39.089182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.760  [2024-12-15T05:47:39.401Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:17.760 00:07:17.760 05:47:39 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:17.760 05:47:39 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:17.760 05:47:39 -- dd/basic_rw.sh@23 -- # count=7 00:07:17.760 05:47:39 -- dd/basic_rw.sh@24 -- # count=7 00:07:17.760 05:47:39 -- dd/basic_rw.sh@25 -- # size=57344 00:07:17.760 05:47:39 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:17.760 05:47:39 -- dd/common.sh@98 -- # xtrace_disable 00:07:17.760 05:47:39 -- common/autotest_common.sh@10 -- # set +x 00:07:18.328 05:47:39 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:18.328 05:47:39 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:18.328 05:47:39 -- dd/common.sh@31 -- # xtrace_disable 00:07:18.328 05:47:39 -- common/autotest_common.sh@10 -- # set +x 00:07:18.328 [2024-12-15 05:47:39.853195] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
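The pass starting here moves from bs=4096 to bs=8192 while dropping the I/O count from 15 to 7, so the transferred size stays in the same range (61440 vs 57344 bytes); the final passes later use count=3 at bs=16384 (49152 bytes). The traced script sets these counts explicitly per block size; the sketch below only illustrates the relationship, with the block-size list built the same way basic_rw.sh shifts the native block size:

native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do
    bss+=( $(( native_bs << s )) )          # 4096, 8192, 16384
done

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$(( 61440 / bs ))             # 15, 7, 3
        printf 'bs=%s qd=%s count=%s size=%s\n' "$bs" "$qd" "$count" "$(( count * bs ))"
    done
done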
00:07:18.328 [2024-12-15 05:47:39.853484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69532 ] 00:07:18.328 { 00:07:18.328 "subsystems": [ 00:07:18.328 { 00:07:18.328 "subsystem": "bdev", 00:07:18.328 "config": [ 00:07:18.328 { 00:07:18.328 "params": { 00:07:18.328 "trtype": "pcie", 00:07:18.328 "traddr": "0000:00:06.0", 00:07:18.328 "name": "Nvme0" 00:07:18.328 }, 00:07:18.328 "method": "bdev_nvme_attach_controller" 00:07:18.328 }, 00:07:18.328 { 00:07:18.328 "method": "bdev_wait_for_examine" 00:07:18.328 } 00:07:18.328 ] 00:07:18.328 } 00:07:18.328 ] 00:07:18.328 } 00:07:18.587 [2024-12-15 05:47:39.991439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.587 [2024-12-15 05:47:40.024734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.587  [2024-12-15T05:47:40.487Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:18.846 00:07:18.846 05:47:40 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:18.846 05:47:40 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:18.846 05:47:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:18.846 05:47:40 -- common/autotest_common.sh@10 -- # set +x 00:07:18.846 [2024-12-15 05:47:40.339045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.846 [2024-12-15 05:47:40.339348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69545 ] 00:07:18.846 { 00:07:18.846 "subsystems": [ 00:07:18.846 { 00:07:18.846 "subsystem": "bdev", 00:07:18.846 "config": [ 00:07:18.846 { 00:07:18.846 "params": { 00:07:18.846 "trtype": "pcie", 00:07:18.846 "traddr": "0000:00:06.0", 00:07:18.846 "name": "Nvme0" 00:07:18.846 }, 00:07:18.846 "method": "bdev_nvme_attach_controller" 00:07:18.846 }, 00:07:18.846 { 00:07:18.846 "method": "bdev_wait_for_examine" 00:07:18.846 } 00:07:18.846 ] 00:07:18.846 } 00:07:18.846 ] 00:07:18.846 } 00:07:18.846 [2024-12-15 05:47:40.475033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.105 [2024-12-15 05:47:40.506720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.105  [2024-12-15T05:47:41.005Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:19.364 00:07:19.364 05:47:40 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.364 05:47:40 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:19.364 05:47:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:19.364 05:47:40 -- dd/common.sh@11 -- # local nvme_ref= 00:07:19.364 05:47:40 -- dd/common.sh@12 -- # local size=57344 00:07:19.364 05:47:40 -- dd/common.sh@14 -- # local bs=1048576 00:07:19.364 05:47:40 -- dd/common.sh@15 -- # local count=1 00:07:19.364 05:47:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:19.364 05:47:40 -- dd/common.sh@18 -- # gen_conf 00:07:19.364 05:47:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:19.364 05:47:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.364 [2024-12-15 
05:47:40.827170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.364 [2024-12-15 05:47:40.827292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69557 ] 00:07:19.364 { 00:07:19.364 "subsystems": [ 00:07:19.364 { 00:07:19.364 "subsystem": "bdev", 00:07:19.364 "config": [ 00:07:19.364 { 00:07:19.364 "params": { 00:07:19.364 "trtype": "pcie", 00:07:19.364 "traddr": "0000:00:06.0", 00:07:19.364 "name": "Nvme0" 00:07:19.364 }, 00:07:19.364 "method": "bdev_nvme_attach_controller" 00:07:19.364 }, 00:07:19.364 { 00:07:19.364 "method": "bdev_wait_for_examine" 00:07:19.364 } 00:07:19.364 ] 00:07:19.365 } 00:07:19.365 ] 00:07:19.365 } 00:07:19.365 [2024-12-15 05:47:40.962931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.365 [2024-12-15 05:47:40.994091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.625  [2024-12-15T05:47:41.266Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:19.625 00:07:19.625 05:47:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:19.625 05:47:41 -- dd/basic_rw.sh@23 -- # count=7 00:07:19.625 05:47:41 -- dd/basic_rw.sh@24 -- # count=7 00:07:19.625 05:47:41 -- dd/basic_rw.sh@25 -- # size=57344 00:07:19.625 05:47:41 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:19.625 05:47:41 -- dd/common.sh@98 -- # xtrace_disable 00:07:19.625 05:47:41 -- common/autotest_common.sh@10 -- # set +x 00:07:20.194 05:47:41 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:20.194 05:47:41 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:20.194 05:47:41 -- dd/common.sh@31 -- # xtrace_disable 00:07:20.194 05:47:41 -- common/autotest_common.sh@10 -- # set +x 00:07:20.194 [2024-12-15 05:47:41.775915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
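Every spdk_dd invocation in this run is handed the same minimal bdev configuration on a substituted file descriptor (the --json /dev/fd/61 and /dev/fd/62 arguments above): attach controller Nvme0 over PCIe at 0000:00:06.0, then wait for bdev examination. A self-contained sketch of roughly what that plumbing amounts to; the process substitution is an assumption about how the /dev/fd path is produced, not something taken from the trace:

gen_conf() {
    # JSON mirrors the config blocks printed throughout the trace.
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# Example use, matching the bs=8192 qd=64 pass that starts here.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --ob=Nvme0n1 --bs=8192 --qd=64 --json <(gen_conf)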
00:07:20.194 [2024-12-15 05:47:41.776041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69571 ] 00:07:20.194 { 00:07:20.194 "subsystems": [ 00:07:20.194 { 00:07:20.194 "subsystem": "bdev", 00:07:20.194 "config": [ 00:07:20.194 { 00:07:20.194 "params": { 00:07:20.194 "trtype": "pcie", 00:07:20.194 "traddr": "0000:00:06.0", 00:07:20.194 "name": "Nvme0" 00:07:20.194 }, 00:07:20.194 "method": "bdev_nvme_attach_controller" 00:07:20.194 }, 00:07:20.194 { 00:07:20.194 "method": "bdev_wait_for_examine" 00:07:20.194 } 00:07:20.194 ] 00:07:20.194 } 00:07:20.194 ] 00:07:20.194 } 00:07:20.453 [2024-12-15 05:47:41.919198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.453 [2024-12-15 05:47:41.950090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.453  [2024-12-15T05:47:42.353Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:20.712 00:07:20.712 05:47:42 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:20.712 05:47:42 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:20.712 05:47:42 -- dd/common.sh@31 -- # xtrace_disable 00:07:20.712 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:07:20.712 [2024-12-15 05:47:42.263377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:20.712 [2024-12-15 05:47:42.263478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69589 ] 00:07:20.712 { 00:07:20.712 "subsystems": [ 00:07:20.712 { 00:07:20.712 "subsystem": "bdev", 00:07:20.712 "config": [ 00:07:20.712 { 00:07:20.712 "params": { 00:07:20.712 "trtype": "pcie", 00:07:20.712 "traddr": "0000:00:06.0", 00:07:20.712 "name": "Nvme0" 00:07:20.712 }, 00:07:20.712 "method": "bdev_nvme_attach_controller" 00:07:20.712 }, 00:07:20.712 { 00:07:20.712 "method": "bdev_wait_for_examine" 00:07:20.712 } 00:07:20.712 ] 00:07:20.712 } 00:07:20.712 ] 00:07:20.712 } 00:07:20.971 [2024-12-15 05:47:42.398715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.971 [2024-12-15 05:47:42.429130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.971  [2024-12-15T05:47:42.871Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:21.230 00:07:21.230 05:47:42 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.230 05:47:42 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:21.230 05:47:42 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:21.230 05:47:42 -- dd/common.sh@11 -- # local nvme_ref= 00:07:21.230 05:47:42 -- dd/common.sh@12 -- # local size=57344 00:07:21.230 05:47:42 -- dd/common.sh@14 -- # local bs=1048576 00:07:21.230 05:47:42 -- dd/common.sh@15 -- # local count=1 00:07:21.230 05:47:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:21.230 05:47:42 -- dd/common.sh@18 -- # gen_conf 00:07:21.230 05:47:42 -- dd/common.sh@31 -- # xtrace_disable 00:07:21.230 05:47:42 -- common/autotest_common.sh@10 -- # set +x 00:07:21.230 [2024-12-15 
05:47:42.740953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:21.230 [2024-12-15 05:47:42.741066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69597 ] 00:07:21.230 { 00:07:21.230 "subsystems": [ 00:07:21.230 { 00:07:21.230 "subsystem": "bdev", 00:07:21.230 "config": [ 00:07:21.230 { 00:07:21.230 "params": { 00:07:21.230 "trtype": "pcie", 00:07:21.230 "traddr": "0000:00:06.0", 00:07:21.230 "name": "Nvme0" 00:07:21.230 }, 00:07:21.230 "method": "bdev_nvme_attach_controller" 00:07:21.230 }, 00:07:21.230 { 00:07:21.230 "method": "bdev_wait_for_examine" 00:07:21.230 } 00:07:21.230 ] 00:07:21.230 } 00:07:21.230 ] 00:07:21.230 } 00:07:21.489 [2024-12-15 05:47:42.877016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.489 [2024-12-15 05:47:42.909999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.489  [2024-12-15T05:47:43.389Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:21.748 00:07:21.748 05:47:43 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:21.748 05:47:43 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:21.748 05:47:43 -- dd/basic_rw.sh@23 -- # count=3 00:07:21.748 05:47:43 -- dd/basic_rw.sh@24 -- # count=3 00:07:21.748 05:47:43 -- dd/basic_rw.sh@25 -- # size=49152 00:07:21.748 05:47:43 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:21.748 05:47:43 -- dd/common.sh@98 -- # xtrace_disable 00:07:21.748 05:47:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.007 05:47:43 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:22.007 05:47:43 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:22.007 05:47:43 -- dd/common.sh@31 -- # xtrace_disable 00:07:22.007 05:47:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.007 [2024-12-15 05:47:43.601651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:22.007 [2024-12-15 05:47:43.601751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69615 ] 00:07:22.007 { 00:07:22.007 "subsystems": [ 00:07:22.007 { 00:07:22.007 "subsystem": "bdev", 00:07:22.007 "config": [ 00:07:22.007 { 00:07:22.007 "params": { 00:07:22.007 "trtype": "pcie", 00:07:22.007 "traddr": "0000:00:06.0", 00:07:22.007 "name": "Nvme0" 00:07:22.007 }, 00:07:22.007 "method": "bdev_nvme_attach_controller" 00:07:22.007 }, 00:07:22.007 { 00:07:22.007 "method": "bdev_wait_for_examine" 00:07:22.007 } 00:07:22.007 ] 00:07:22.007 } 00:07:22.007 ] 00:07:22.007 } 00:07:22.267 [2024-12-15 05:47:43.738295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.267 [2024-12-15 05:47:43.769146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.267  [2024-12-15T05:47:44.167Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:22.526 00:07:22.526 05:47:44 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:22.526 05:47:44 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:22.526 05:47:44 -- dd/common.sh@31 -- # xtrace_disable 00:07:22.526 05:47:44 -- common/autotest_common.sh@10 -- # set +x 00:07:22.526 [2024-12-15 05:47:44.090207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:22.526 [2024-12-15 05:47:44.090501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69627 ] 00:07:22.526 { 00:07:22.526 "subsystems": [ 00:07:22.526 { 00:07:22.526 "subsystem": "bdev", 00:07:22.526 "config": [ 00:07:22.526 { 00:07:22.526 "params": { 00:07:22.526 "trtype": "pcie", 00:07:22.526 "traddr": "0000:00:06.0", 00:07:22.526 "name": "Nvme0" 00:07:22.526 }, 00:07:22.526 "method": "bdev_nvme_attach_controller" 00:07:22.526 }, 00:07:22.526 { 00:07:22.526 "method": "bdev_wait_for_examine" 00:07:22.526 } 00:07:22.526 ] 00:07:22.526 } 00:07:22.526 ] 00:07:22.526 } 00:07:22.785 [2024-12-15 05:47:44.227822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.785 [2024-12-15 05:47:44.258343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.785  [2024-12-15T05:47:44.685Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:23.044 00:07:23.044 05:47:44 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.044 05:47:44 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:23.044 05:47:44 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:23.044 05:47:44 -- dd/common.sh@11 -- # local nvme_ref= 00:07:23.044 05:47:44 -- dd/common.sh@12 -- # local size=49152 00:07:23.044 05:47:44 -- dd/common.sh@14 -- # local bs=1048576 00:07:23.044 05:47:44 -- dd/common.sh@15 -- # local count=1 00:07:23.044 05:47:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:23.044 05:47:44 -- dd/common.sh@18 -- # gen_conf 00:07:23.044 05:47:44 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.044 05:47:44 -- common/autotest_common.sh@10 -- # set +x 00:07:23.044 [2024-12-15 
05:47:44.569348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.044 [2024-12-15 05:47:44.569805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69641 ] 00:07:23.044 { 00:07:23.044 "subsystems": [ 00:07:23.044 { 00:07:23.044 "subsystem": "bdev", 00:07:23.044 "config": [ 00:07:23.044 { 00:07:23.044 "params": { 00:07:23.044 "trtype": "pcie", 00:07:23.044 "traddr": "0000:00:06.0", 00:07:23.044 "name": "Nvme0" 00:07:23.044 }, 00:07:23.044 "method": "bdev_nvme_attach_controller" 00:07:23.044 }, 00:07:23.044 { 00:07:23.044 "method": "bdev_wait_for_examine" 00:07:23.044 } 00:07:23.044 ] 00:07:23.044 } 00:07:23.044 ] 00:07:23.044 } 00:07:23.303 [2024-12-15 05:47:44.707363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.303 [2024-12-15 05:47:44.738254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.303  [2024-12-15T05:47:45.204Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:23.563 00:07:23.563 05:47:45 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:23.563 05:47:45 -- dd/basic_rw.sh@23 -- # count=3 00:07:23.563 05:47:45 -- dd/basic_rw.sh@24 -- # count=3 00:07:23.563 05:47:45 -- dd/basic_rw.sh@25 -- # size=49152 00:07:23.563 05:47:45 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:23.563 05:47:45 -- dd/common.sh@98 -- # xtrace_disable 00:07:23.563 05:47:45 -- common/autotest_common.sh@10 -- # set +x 00:07:23.821 05:47:45 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:23.821 05:47:45 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:23.821 05:47:45 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.821 05:47:45 -- common/autotest_common.sh@10 -- # set +x 00:07:23.821 [2024-12-15 05:47:45.445664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
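Every iteration of the dd_rw loop traced in this log follows the same shape: write dd.dump0 into the Nvme0n1 bdev at the current block size and queue depth, read the range back into dd.dump1, diff the two files, and zero the bdev again before the next combination. A minimal sketch of one iteration is below; the JSON is the same bdev configuration printed throughout the log, the DD/DUMP0/DUMP1/CONF variables are only shorthand introduced here, and feeding the config through process substitution mirrors the --json /dev/fd/62 argument seen in each invocation.

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
CONF='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},
   "method":"bdev_nvme_attach_controller"},
  {"method":"bdev_wait_for_examine"}]}]}'

# write dd.dump0 (56 KiB of generated data) into the bdev, then read it back
"$DD" --if="$DUMP0" --ob=Nvme0n1 --bs=8192 --qd=64 --json <(printf '%s' "$CONF")
"$DD" --ib=Nvme0n1 --of="$DUMP1" --bs=8192 --qd=64 --count=7 --json <(printf '%s' "$CONF")

# the round trip must be lossless
diff -q "$DUMP0" "$DUMP1"

# zero the bdev again before the next bs/qd combination (the clear_nvme step in the trace)
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$CONF")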
00:07:23.821 [2024-12-15 05:47:45.445761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69659 ] 00:07:23.821 { 00:07:23.821 "subsystems": [ 00:07:23.821 { 00:07:23.821 "subsystem": "bdev", 00:07:23.821 "config": [ 00:07:23.821 { 00:07:23.821 "params": { 00:07:23.821 "trtype": "pcie", 00:07:23.821 "traddr": "0000:00:06.0", 00:07:23.821 "name": "Nvme0" 00:07:23.821 }, 00:07:23.821 "method": "bdev_nvme_attach_controller" 00:07:23.821 }, 00:07:23.821 { 00:07:23.821 "method": "bdev_wait_for_examine" 00:07:23.821 } 00:07:23.821 ] 00:07:23.821 } 00:07:23.821 ] 00:07:23.821 } 00:07:24.079 [2024-12-15 05:47:45.583013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.079 [2024-12-15 05:47:45.613701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.079  [2024-12-15T05:47:45.979Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:24.338 00:07:24.338 05:47:45 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:24.338 05:47:45 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:24.338 05:47:45 -- dd/common.sh@31 -- # xtrace_disable 00:07:24.338 05:47:45 -- common/autotest_common.sh@10 -- # set +x 00:07:24.338 [2024-12-15 05:47:45.922581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:24.338 [2024-12-15 05:47:45.922751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69666 ] 00:07:24.338 { 00:07:24.338 "subsystems": [ 00:07:24.338 { 00:07:24.338 "subsystem": "bdev", 00:07:24.338 "config": [ 00:07:24.338 { 00:07:24.338 "params": { 00:07:24.338 "trtype": "pcie", 00:07:24.338 "traddr": "0000:00:06.0", 00:07:24.338 "name": "Nvme0" 00:07:24.338 }, 00:07:24.338 "method": "bdev_nvme_attach_controller" 00:07:24.338 }, 00:07:24.338 { 00:07:24.338 "method": "bdev_wait_for_examine" 00:07:24.338 } 00:07:24.338 ] 00:07:24.338 } 00:07:24.338 ] 00:07:24.338 } 00:07:24.597 [2024-12-15 05:47:46.069171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.597 [2024-12-15 05:47:46.100814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.597  [2024-12-15T05:47:46.497Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:24.856 00:07:24.856 05:47:46 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.856 05:47:46 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:24.856 05:47:46 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:24.856 05:47:46 -- dd/common.sh@11 -- # local nvme_ref= 00:07:24.856 05:47:46 -- dd/common.sh@12 -- # local size=49152 00:07:24.856 05:47:46 -- dd/common.sh@14 -- # local bs=1048576 00:07:24.856 05:47:46 -- dd/common.sh@15 -- # local count=1 00:07:24.856 05:47:46 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:24.856 05:47:46 -- dd/common.sh@18 -- # gen_conf 00:07:24.856 05:47:46 -- dd/common.sh@31 -- # xtrace_disable 00:07:24.856 05:47:46 -- common/autotest_common.sh@10 -- # set +x 00:07:24.856 [2024-12-15 
05:47:46.412506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:24.856 [2024-12-15 05:47:46.412768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69685 ] 00:07:24.856 { 00:07:24.856 "subsystems": [ 00:07:24.856 { 00:07:24.856 "subsystem": "bdev", 00:07:24.856 "config": [ 00:07:24.856 { 00:07:24.856 "params": { 00:07:24.856 "trtype": "pcie", 00:07:24.856 "traddr": "0000:00:06.0", 00:07:24.856 "name": "Nvme0" 00:07:24.856 }, 00:07:24.856 "method": "bdev_nvme_attach_controller" 00:07:24.856 }, 00:07:24.856 { 00:07:24.856 "method": "bdev_wait_for_examine" 00:07:24.856 } 00:07:24.856 ] 00:07:24.856 } 00:07:24.856 ] 00:07:24.856 } 00:07:25.116 [2024-12-15 05:47:46.548455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.116 [2024-12-15 05:47:46.578768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.116  [2024-12-15T05:47:47.017Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:25.376 00:07:25.376 00:07:25.376 real 0m11.358s 00:07:25.376 user 0m8.227s 00:07:25.376 sys 0m2.085s 00:07:25.376 05:47:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.376 ************************************ 00:07:25.376 END TEST dd_rw 00:07:25.376 ************************************ 00:07:25.376 05:47:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.376 05:47:46 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:25.376 05:47:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:25.376 05:47:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.376 05:47:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.376 ************************************ 00:07:25.376 START TEST dd_rw_offset 00:07:25.376 ************************************ 00:07:25.376 05:47:46 -- common/autotest_common.sh@1114 -- # basic_offset 00:07:25.376 05:47:46 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:25.376 05:47:46 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:25.376 05:47:46 -- dd/common.sh@98 -- # xtrace_disable 00:07:25.376 05:47:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.376 05:47:46 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:25.376 05:47:46 -- dd/basic_rw.sh@56 -- # 
data=ebuh6rk8lee0xj4ymgpre4svuq9rqjrjphg3n3t1jrwl77kf1vrwq7vzhpjivof5d0dadw8qoauk6yoqx2qj4dsa5xwzypjauveiugr607911w1fnr05tyjsu2q5j39dm60jzaptbp3h70avlt4fs8kysx1xf66ofoiw5dv6525gqkuuxk02pq00upv0u7635ydo68wt7ruoi37oqoh19gfq26yq6wvct38o04k4bxbagniath8693gps52jar9npxg1xa1zv6dtpsc08uqe4poacnwndbbtoq0mknaapjpynp4eg0m885e356kblc0g9kep15yxufynckieg22zvad3fyzb5rmvtw6kd3az11vwsoruflgiu4znq6o6kqynoqg62fhylteldvbzgp8dkujqumkvpg1ar06ux68gns21wjlwk5jgegm7jtwbig9t1rk61ix2708ecmx4smxxhkxl1dqy4esrde0q9av7i0lwftp3jhvydukx74i1cy86jsp775s4rj2o33ixm41qmt2hlc90ifbr6kffoqfazly1lk5tm4s7f86eqvggfoptrtd9r3qvk97ejxx09rfdbl9z9umi6gbl2xquevx4glqndiy3caxzmfgwbcm7vaeszkdvlyba6olwe7q9z8pue9ksa3opodc695pqygw06mmvud3e1z6zrlllw17zkjrkre275a33nqm8rxp8levbpb9o7t6zg5zqj273lsz0rbueylij0gncloyu2qc4x6dq4e5hjaoeqv3exuoboftdw6p4ru7xfwvuuhh3j71qcz8ahqb5dft4e6ke4t31p700o6ywgjev6pm8ttcrdu4esiiig3a4cwrxf1l9uyzf2be24rrqqfj5o7zbmclvbi2mbtwy3lu4cdq9haccnsu04fml95gijtq8hlk8zvffe5u2d1bq4c71h7sn454tydfd4j1sj56lm5us3k1o17svp436905fgw9tp9e9vuy50jej2lh58eyf84lof4r6navxjikk72zgvifecb5gliuxo2s39u7ucou104gt0rm7ng4edfw9i1fbrriudaifrlc8shd9ld5qih85e27vru6b8g56dhmhpaksdwy9qbcg4htzmxtu1l1tzxhnxrinvhe6il3iqhrli7eqhzbs7igwtcydg4tf3mp3zx9ij9qpky1pl5ecc1jbx31tdr325f8qwc17whrf237s7fwwf4bii1fqt5pwhnduf4g95mafbfvplamyz4p1617cm67orc81s5t8bo7athtgq8516cklv4pqhiv0d4lswrw5t1jcvxmkwfi4zl78j4jwduqx193fg5l4jnl9f7t0pt8q8bxlefpxg1psrpc69853p8zuhgdh93z620z0r9mchn2e7o4qf9jqef3vyqxag75oz2582x4jzoodi08r66wbfh1petek6ykwdhrv8e0cpu5512ghew3ocwptpz6t8nnuej8qql74yf9shv9hdria0o8u98va7gm9f2n4o842we709t8cz3gzbrb2voqpplw573wdn6wmbr7gveruy7dpz5fypeezem326xr1nf6at7a0k94ozafk6mrh5hnm7ufayv4utv1b3bdp8xnr0vsvz9osbuk62sseqyvk4h6759rhuvekq9zv6o6sscn8muqler1pr9mx5rnjk3nlhtiwm70visl7urqe4hj9e2rqxp6kmc5bmpz8q29814kmi616xe7uh0tw983oaihs0p5t2ywhdtbkfvz5t2nq37oydzwazj3f4e1uizjcymb8d27i96f6fmg73b6tr5g1l52x94rnld555b5fuvhwfmtdmw4qmlv0tv3o23s7dkhlgq6bkjs6xx0d44z2rmkuocy6m36mn9ahbwa7jttci0q90vljt88yvky43fvr8eb0y5aqq0gnnr8j4kwnsokqapzv75elm2afi846rfca0b7gqwtdv0lu4a0qdxv89r4j3yh5f1r4mw6t3vnl8dfd3fl76wo8ubjokz9ib4q9g7eh9oqttesp30b3rw927bax6rcg5toj27drnl5twjs0kejzhulbsz0eguul09ukw210yfeomey2j96j4vlkbacrfo4nhh4gdic2e8knakh5e4uj7rin9zvq4f3tsgbh4bi64elv0nxe1g1zmmjrau6jmhajiposg0fg2lzf5y04e47cf64dkhfosulas5c700nppk0j6kx6s6r5xkuxh4k2rhh2jq58aos93ee5anl9ip0oz6gxy41egox50n8vtkvkrwvtzbvrh0of0daqqctuvr9p7xi68muf62xhybbc28x5wwfp6aweafmkuqtu50fo0g8ihk2pws2zs5igmxf5oe8t2jtrabhebz51p942jphjoivk8m8eg42pvftqiintxxrn03gy1yk7qoqy9iwhf9djllavgn800a9fg4uo10i31cl51z6whq9urfeqsh6qw676ptij2uck69ztqklmhftplah66pw64cojlzzom79j9qaz7xdxf5f257nhwbsik20ohju802248mq4ph0z265vgm4al92gtowhx8d65oa50cayutnpku2ugo3bl0pg8jp07nsk2dz2sc26i3cbw8rrn7s66epsnniklb2mh2tsxi0qupuz9adhextd72paitejekiq1ihhdwwwk49z573oca0yi8y5edwtfpd7onkmwqydawd0femw61dawf0pcpz0dynjp7njg6i5rhr3t8jaqwrgp6qdtmlr4x0as7t6x8ikl57hlukb96e0c52tl220bv6oulh3djj2wba2bqs6iyp0r9mip82reos74c189ww2mm72zx9krdusqbftym53y7bbm8ufumntiqzdfilp4da6bdq85xmyejajvcmzvt7ni9pu3wzfic0e4tw8asxc2bq59hygh36m93l1oginh465fmtuiayqmw7j59ib52s904d9go12rx5cbwanxvey7cnq5e73er12k2k6dzufxgc9xf3qd59w3dpc8nuy4nsio4xvml91ehqra1dnfahosi5ru9nu1wbmvoc3uj64n5a2lyyfczm6y4hwcvk1ej3d4wu6krurhktpwb83vhhc2j1qxloha8zm5bk2p49g47kml59oe6qli5o3hitx285fqmwuolyym40dnusj4w3kv6o7zpdq7xkrsldsm07turk7fbs6u2jwd7mnr3ob5j5xkedq3n2yahg5n77l3325vhavxus7uonjika3fkb88rzzqsok4s0d9h9d0bxtaw3nvb5lf104h70scyg4gi6wwb48qnsw0bir3vnnfnkzsygk2rplvl8ar13dl3krm13v7imxn5m5eobqu85ecmjo8rey8oc2h7woh8r3uoug4u6cyrd3menc652r2l9dj4nllbd8q0i8f78cx52jj3106wnjm9h5daeb1xmoa3vdm01zhp6ylf1vm515qegx2vj0xsdtdu12vh03gv3l2b6i3vhv0cvfbcrl2nkbnvhx37nfezv9096xuc
kaog3lgoygl8toz4koknue8ig185flvjiwyj5xr1bdkxnkkrjbnnq3zl1wj49xrso8vaoubjhczjsi5f62rvfecmxmwgh2r2u68qu72825kx3frt5jqf61yw4nmba31g6jfkjsm7bojv600zzvomhyyb4bcvd47liilo73ch3jjvgg4sc6lnh3crycqub8xwyjmwovvhpbcdwyad5pwatdtxsh7ctl2lao0z9yjmt51y7539u2xsp9fxmkv3oahso3p0ls0evz5uw7g3z5at6s4kelqp3oe1cri9xxtnmzwchqgxllc1bpjzhq6yfjx8wtoo4hj9jxthuuu8m23i9m9fags47rtuv6xrrw0ehsy7ngs42c61r16fo4en8rw7ugug34y6ism01hsbesjm6te6z4snsisdlw3f3vuyzn3oxfz1pyf651u9v5ujkypp6qmf7g8y287gn67pcunubavkyx8mvn1se1ci56w6m7z9w7uo2ql43dfq507e8vaf5yxgwt5j59vcx69im24qqwie7gzvqf82pu 00:07:25.376 05:47:46 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:25.376 05:47:46 -- dd/basic_rw.sh@59 -- # gen_conf 00:07:25.376 05:47:46 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.376 05:47:46 -- common/autotest_common.sh@10 -- # set +x 00:07:25.376 [2024-12-15 05:47:46.980494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:25.376 [2024-12-15 05:47:46.981204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69709 ] 00:07:25.376 { 00:07:25.376 "subsystems": [ 00:07:25.376 { 00:07:25.376 "subsystem": "bdev", 00:07:25.376 "config": [ 00:07:25.376 { 00:07:25.376 "params": { 00:07:25.376 "trtype": "pcie", 00:07:25.376 "traddr": "0000:00:06.0", 00:07:25.376 "name": "Nvme0" 00:07:25.376 }, 00:07:25.376 "method": "bdev_nvme_attach_controller" 00:07:25.376 }, 00:07:25.376 { 00:07:25.376 "method": "bdev_wait_for_examine" 00:07:25.376 } 00:07:25.376 ] 00:07:25.376 } 00:07:25.376 ] 00:07:25.376 } 00:07:25.635 [2024-12-15 05:47:47.119450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.635 [2024-12-15 05:47:47.151575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.635  [2024-12-15T05:47:47.552Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:25.911 00:07:25.911 05:47:47 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:25.911 05:47:47 -- dd/basic_rw.sh@65 -- # gen_conf 00:07:25.911 05:47:47 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.911 05:47:47 -- common/autotest_common.sh@10 -- # set +x 00:07:25.911 [2024-12-15 05:47:47.458734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
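The dd_rw_offset test running at this point exercises the block-offset options of spdk_dd: --seek=1 places the generated 4 KiB payload one block into the bdev on the write side, and --skip=1 --count=1 pulls that same single block back out on the read side. Reduced to the two commands, reusing the shorthand variables from the earlier sketch:

# write the generated 4 KiB block at block offset 1 on Nvme0n1 ...
"$DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$CONF")

# ... and read exactly one block back from the same offset into dd.dump1
"$DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(printf '%s' "$CONF")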
00:07:25.912 [2024-12-15 05:47:47.458838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69727 ] 00:07:25.912 { 00:07:25.912 "subsystems": [ 00:07:25.912 { 00:07:25.912 "subsystem": "bdev", 00:07:25.912 "config": [ 00:07:25.912 { 00:07:25.912 "params": { 00:07:25.912 "trtype": "pcie", 00:07:25.912 "traddr": "0000:00:06.0", 00:07:25.912 "name": "Nvme0" 00:07:25.912 }, 00:07:25.912 "method": "bdev_nvme_attach_controller" 00:07:25.912 }, 00:07:25.912 { 00:07:25.912 "method": "bdev_wait_for_examine" 00:07:25.912 } 00:07:25.912 ] 00:07:25.912 } 00:07:25.912 ] 00:07:25.912 } 00:07:26.194 [2024-12-15 05:47:47.596146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.194 [2024-12-15 05:47:47.627467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.194  [2024-12-15T05:47:48.095Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:26.454 00:07:26.454 05:47:47 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:26.454 ************************************ 00:07:26.454 END TEST dd_rw_offset 00:07:26.454 ************************************ 00:07:26.455 05:47:47 -- dd/basic_rw.sh@72 -- # [[ ebuh6rk8lee0xj4ymgpre4svuq9rqjrjphg3n3t1jrwl77kf1vrwq7vzhpjivof5d0dadw8qoauk6yoqx2qj4dsa5xwzypjauveiugr607911w1fnr05tyjsu2q5j39dm60jzaptbp3h70avlt4fs8kysx1xf66ofoiw5dv6525gqkuuxk02pq00upv0u7635ydo68wt7ruoi37oqoh19gfq26yq6wvct38o04k4bxbagniath8693gps52jar9npxg1xa1zv6dtpsc08uqe4poacnwndbbtoq0mknaapjpynp4eg0m885e356kblc0g9kep15yxufynckieg22zvad3fyzb5rmvtw6kd3az11vwsoruflgiu4znq6o6kqynoqg62fhylteldvbzgp8dkujqumkvpg1ar06ux68gns21wjlwk5jgegm7jtwbig9t1rk61ix2708ecmx4smxxhkxl1dqy4esrde0q9av7i0lwftp3jhvydukx74i1cy86jsp775s4rj2o33ixm41qmt2hlc90ifbr6kffoqfazly1lk5tm4s7f86eqvggfoptrtd9r3qvk97ejxx09rfdbl9z9umi6gbl2xquevx4glqndiy3caxzmfgwbcm7vaeszkdvlyba6olwe7q9z8pue9ksa3opodc695pqygw06mmvud3e1z6zrlllw17zkjrkre275a33nqm8rxp8levbpb9o7t6zg5zqj273lsz0rbueylij0gncloyu2qc4x6dq4e5hjaoeqv3exuoboftdw6p4ru7xfwvuuhh3j71qcz8ahqb5dft4e6ke4t31p700o6ywgjev6pm8ttcrdu4esiiig3a4cwrxf1l9uyzf2be24rrqqfj5o7zbmclvbi2mbtwy3lu4cdq9haccnsu04fml95gijtq8hlk8zvffe5u2d1bq4c71h7sn454tydfd4j1sj56lm5us3k1o17svp436905fgw9tp9e9vuy50jej2lh58eyf84lof4r6navxjikk72zgvifecb5gliuxo2s39u7ucou104gt0rm7ng4edfw9i1fbrriudaifrlc8shd9ld5qih85e27vru6b8g56dhmhpaksdwy9qbcg4htzmxtu1l1tzxhnxrinvhe6il3iqhrli7eqhzbs7igwtcydg4tf3mp3zx9ij9qpky1pl5ecc1jbx31tdr325f8qwc17whrf237s7fwwf4bii1fqt5pwhnduf4g95mafbfvplamyz4p1617cm67orc81s5t8bo7athtgq8516cklv4pqhiv0d4lswrw5t1jcvxmkwfi4zl78j4jwduqx193fg5l4jnl9f7t0pt8q8bxlefpxg1psrpc69853p8zuhgdh93z620z0r9mchn2e7o4qf9jqef3vyqxag75oz2582x4jzoodi08r66wbfh1petek6ykwdhrv8e0cpu5512ghew3ocwptpz6t8nnuej8qql74yf9shv9hdria0o8u98va7gm9f2n4o842we709t8cz3gzbrb2voqpplw573wdn6wmbr7gveruy7dpz5fypeezem326xr1nf6at7a0k94ozafk6mrh5hnm7ufayv4utv1b3bdp8xnr0vsvz9osbuk62sseqyvk4h6759rhuvekq9zv6o6sscn8muqler1pr9mx5rnjk3nlhtiwm70visl7urqe4hj9e2rqxp6kmc5bmpz8q29814kmi616xe7uh0tw983oaihs0p5t2ywhdtbkfvz5t2nq37oydzwazj3f4e1uizjcymb8d27i96f6fmg73b6tr5g1l52x94rnld555b5fuvhwfmtdmw4qmlv0tv3o23s7dkhlgq6bkjs6xx0d44z2rmkuocy6m36mn9ahbwa7jttci0q90vljt88yvky43fvr8eb0y5aqq0gnnr8j4kwnsokqapzv75elm2afi846rfca0b7gqwtdv0lu4a0qdxv89r4j3yh5f1r4mw6t3vnl8dfd3fl76wo8ubjokz9ib4q9g7eh9oqttesp30b3rw927bax6rcg5toj27drnl5twjs0kejzhulbsz0eguul09ukw210yfeomey2j96j4vlkbacrfo4nhh4gdic2e8knakh5e4uj7rin9zvq4f3tsgbh4bi64elv0nxe1g1zmmjrau6jmhajiposg0fg2lzf5y04e47cf64dkhfosula
s5c700nppk0j6kx6s6r5xkuxh4k2rhh2jq58aos93ee5anl9ip0oz6gxy41egox50n8vtkvkrwvtzbvrh0of0daqqctuvr9p7xi68muf62xhybbc28x5wwfp6aweafmkuqtu50fo0g8ihk2pws2zs5igmxf5oe8t2jtrabhebz51p942jphjoivk8m8eg42pvftqiintxxrn03gy1yk7qoqy9iwhf9djllavgn800a9fg4uo10i31cl51z6whq9urfeqsh6qw676ptij2uck69ztqklmhftplah66pw64cojlzzom79j9qaz7xdxf5f257nhwbsik20ohju802248mq4ph0z265vgm4al92gtowhx8d65oa50cayutnpku2ugo3bl0pg8jp07nsk2dz2sc26i3cbw8rrn7s66epsnniklb2mh2tsxi0qupuz9adhextd72paitejekiq1ihhdwwwk49z573oca0yi8y5edwtfpd7onkmwqydawd0femw61dawf0pcpz0dynjp7njg6i5rhr3t8jaqwrgp6qdtmlr4x0as7t6x8ikl57hlukb96e0c52tl220bv6oulh3djj2wba2bqs6iyp0r9mip82reos74c189ww2mm72zx9krdusqbftym53y7bbm8ufumntiqzdfilp4da6bdq85xmyejajvcmzvt7ni9pu3wzfic0e4tw8asxc2bq59hygh36m93l1oginh465fmtuiayqmw7j59ib52s904d9go12rx5cbwanxvey7cnq5e73er12k2k6dzufxgc9xf3qd59w3dpc8nuy4nsio4xvml91ehqra1dnfahosi5ru9nu1wbmvoc3uj64n5a2lyyfczm6y4hwcvk1ej3d4wu6krurhktpwb83vhhc2j1qxloha8zm5bk2p49g47kml59oe6qli5o3hitx285fqmwuolyym40dnusj4w3kv6o7zpdq7xkrsldsm07turk7fbs6u2jwd7mnr3ob5j5xkedq3n2yahg5n77l3325vhavxus7uonjika3fkb88rzzqsok4s0d9h9d0bxtaw3nvb5lf104h70scyg4gi6wwb48qnsw0bir3vnnfnkzsygk2rplvl8ar13dl3krm13v7imxn5m5eobqu85ecmjo8rey8oc2h7woh8r3uoug4u6cyrd3menc652r2l9dj4nllbd8q0i8f78cx52jj3106wnjm9h5daeb1xmoa3vdm01zhp6ylf1vm515qegx2vj0xsdtdu12vh03gv3l2b6i3vhv0cvfbcrl2nkbnvhx37nfezv9096xuckaog3lgoygl8toz4koknue8ig185flvjiwyj5xr1bdkxnkkrjbnnq3zl1wj49xrso8vaoubjhczjsi5f62rvfecmxmwgh2r2u68qu72825kx3frt5jqf61yw4nmba31g6jfkjsm7bojv600zzvomhyyb4bcvd47liilo73ch3jjvgg4sc6lnh3crycqub8xwyjmwovvhpbcdwyad5pwatdtxsh7ctl2lao0z9yjmt51y7539u2xsp9fxmkv3oahso3p0ls0evz5uw7g3z5at6s4kelqp3oe1cri9xxtnmzwchqgxllc1bpjzhq6yfjx8wtoo4hj9jxthuuu8m23i9m9fags47rtuv6xrrw0ehsy7ngs42c61r16fo4en8rw7ugug34y6ism01hsbesjm6te6z4snsisdlw3f3vuyzn3oxfz1pyf651u9v5ujkypp6qmf7g8y287gn67pcunubavkyx8mvn1se1ci56w6m7z9w7uo2ql43dfq507e8vaf5yxgwt5j59vcx69im24qqwie7gzvqf82pu == 
\e\b\u\h\6\r\k\8\l\e\e\0\x\j\4\y\m\g\p\r\e\4\s\v\u\q\9\r\q\j\r\j\p\h\g\3\n\3\t\1\j\r\w\l\7\7\k\f\1\v\r\w\q\7\v\z\h\p\j\i\v\o\f\5\d\0\d\a\d\w\8\q\o\a\u\k\6\y\o\q\x\2\q\j\4\d\s\a\5\x\w\z\y\p\j\a\u\v\e\i\u\g\r\6\0\7\9\1\1\w\1\f\n\r\0\5\t\y\j\s\u\2\q\5\j\3\9\d\m\6\0\j\z\a\p\t\b\p\3\h\7\0\a\v\l\t\4\f\s\8\k\y\s\x\1\x\f\6\6\o\f\o\i\w\5\d\v\6\5\2\5\g\q\k\u\u\x\k\0\2\p\q\0\0\u\p\v\0\u\7\6\3\5\y\d\o\6\8\w\t\7\r\u\o\i\3\7\o\q\o\h\1\9\g\f\q\2\6\y\q\6\w\v\c\t\3\8\o\0\4\k\4\b\x\b\a\g\n\i\a\t\h\8\6\9\3\g\p\s\5\2\j\a\r\9\n\p\x\g\1\x\a\1\z\v\6\d\t\p\s\c\0\8\u\q\e\4\p\o\a\c\n\w\n\d\b\b\t\o\q\0\m\k\n\a\a\p\j\p\y\n\p\4\e\g\0\m\8\8\5\e\3\5\6\k\b\l\c\0\g\9\k\e\p\1\5\y\x\u\f\y\n\c\k\i\e\g\2\2\z\v\a\d\3\f\y\z\b\5\r\m\v\t\w\6\k\d\3\a\z\1\1\v\w\s\o\r\u\f\l\g\i\u\4\z\n\q\6\o\6\k\q\y\n\o\q\g\6\2\f\h\y\l\t\e\l\d\v\b\z\g\p\8\d\k\u\j\q\u\m\k\v\p\g\1\a\r\0\6\u\x\6\8\g\n\s\2\1\w\j\l\w\k\5\j\g\e\g\m\7\j\t\w\b\i\g\9\t\1\r\k\6\1\i\x\2\7\0\8\e\c\m\x\4\s\m\x\x\h\k\x\l\1\d\q\y\4\e\s\r\d\e\0\q\9\a\v\7\i\0\l\w\f\t\p\3\j\h\v\y\d\u\k\x\7\4\i\1\c\y\8\6\j\s\p\7\7\5\s\4\r\j\2\o\3\3\i\x\m\4\1\q\m\t\2\h\l\c\9\0\i\f\b\r\6\k\f\f\o\q\f\a\z\l\y\1\l\k\5\t\m\4\s\7\f\8\6\e\q\v\g\g\f\o\p\t\r\t\d\9\r\3\q\v\k\9\7\e\j\x\x\0\9\r\f\d\b\l\9\z\9\u\m\i\6\g\b\l\2\x\q\u\e\v\x\4\g\l\q\n\d\i\y\3\c\a\x\z\m\f\g\w\b\c\m\7\v\a\e\s\z\k\d\v\l\y\b\a\6\o\l\w\e\7\q\9\z\8\p\u\e\9\k\s\a\3\o\p\o\d\c\6\9\5\p\q\y\g\w\0\6\m\m\v\u\d\3\e\1\z\6\z\r\l\l\l\w\1\7\z\k\j\r\k\r\e\2\7\5\a\3\3\n\q\m\8\r\x\p\8\l\e\v\b\p\b\9\o\7\t\6\z\g\5\z\q\j\2\7\3\l\s\z\0\r\b\u\e\y\l\i\j\0\g\n\c\l\o\y\u\2\q\c\4\x\6\d\q\4\e\5\h\j\a\o\e\q\v\3\e\x\u\o\b\o\f\t\d\w\6\p\4\r\u\7\x\f\w\v\u\u\h\h\3\j\7\1\q\c\z\8\a\h\q\b\5\d\f\t\4\e\6\k\e\4\t\3\1\p\7\0\0\o\6\y\w\g\j\e\v\6\p\m\8\t\t\c\r\d\u\4\e\s\i\i\i\g\3\a\4\c\w\r\x\f\1\l\9\u\y\z\f\2\b\e\2\4\r\r\q\q\f\j\5\o\7\z\b\m\c\l\v\b\i\2\m\b\t\w\y\3\l\u\4\c\d\q\9\h\a\c\c\n\s\u\0\4\f\m\l\9\5\g\i\j\t\q\8\h\l\k\8\z\v\f\f\e\5\u\2\d\1\b\q\4\c\7\1\h\7\s\n\4\5\4\t\y\d\f\d\4\j\1\s\j\5\6\l\m\5\u\s\3\k\1\o\1\7\s\v\p\4\3\6\9\0\5\f\g\w\9\t\p\9\e\9\v\u\y\5\0\j\e\j\2\l\h\5\8\e\y\f\8\4\l\o\f\4\r\6\n\a\v\x\j\i\k\k\7\2\z\g\v\i\f\e\c\b\5\g\l\i\u\x\o\2\s\3\9\u\7\u\c\o\u\1\0\4\g\t\0\r\m\7\n\g\4\e\d\f\w\9\i\1\f\b\r\r\i\u\d\a\i\f\r\l\c\8\s\h\d\9\l\d\5\q\i\h\8\5\e\2\7\v\r\u\6\b\8\g\5\6\d\h\m\h\p\a\k\s\d\w\y\9\q\b\c\g\4\h\t\z\m\x\t\u\1\l\1\t\z\x\h\n\x\r\i\n\v\h\e\6\i\l\3\i\q\h\r\l\i\7\e\q\h\z\b\s\7\i\g\w\t\c\y\d\g\4\t\f\3\m\p\3\z\x\9\i\j\9\q\p\k\y\1\p\l\5\e\c\c\1\j\b\x\3\1\t\d\r\3\2\5\f\8\q\w\c\1\7\w\h\r\f\2\3\7\s\7\f\w\w\f\4\b\i\i\1\f\q\t\5\p\w\h\n\d\u\f\4\g\9\5\m\a\f\b\f\v\p\l\a\m\y\z\4\p\1\6\1\7\c\m\6\7\o\r\c\8\1\s\5\t\8\b\o\7\a\t\h\t\g\q\8\5\1\6\c\k\l\v\4\p\q\h\i\v\0\d\4\l\s\w\r\w\5\t\1\j\c\v\x\m\k\w\f\i\4\z\l\7\8\j\4\j\w\d\u\q\x\1\9\3\f\g\5\l\4\j\n\l\9\f\7\t\0\p\t\8\q\8\b\x\l\e\f\p\x\g\1\p\s\r\p\c\6\9\8\5\3\p\8\z\u\h\g\d\h\9\3\z\6\2\0\z\0\r\9\m\c\h\n\2\e\7\o\4\q\f\9\j\q\e\f\3\v\y\q\x\a\g\7\5\o\z\2\5\8\2\x\4\j\z\o\o\d\i\0\8\r\6\6\w\b\f\h\1\p\e\t\e\k\6\y\k\w\d\h\r\v\8\e\0\c\p\u\5\5\1\2\g\h\e\w\3\o\c\w\p\t\p\z\6\t\8\n\n\u\e\j\8\q\q\l\7\4\y\f\9\s\h\v\9\h\d\r\i\a\0\o\8\u\9\8\v\a\7\g\m\9\f\2\n\4\o\8\4\2\w\e\7\0\9\t\8\c\z\3\g\z\b\r\b\2\v\o\q\p\p\l\w\5\7\3\w\d\n\6\w\m\b\r\7\g\v\e\r\u\y\7\d\p\z\5\f\y\p\e\e\z\e\m\3\2\6\x\r\1\n\f\6\a\t\7\a\0\k\9\4\o\z\a\f\k\6\m\r\h\5\h\n\m\7\u\f\a\y\v\4\u\t\v\1\b\3\b\d\p\8\x\n\r\0\v\s\v\z\9\o\s\b\u\k\6\2\s\s\e\q\y\v\k\4\h\6\7\5\9\r\h\u\v\e\k\q\9\z\v\6\o\6\s\s\c\n\8\m\u\q\l\e\r\1\p\r\9\m\x\5\r\n\j\k\3\n\l\h\t\i\w\m\7\0\v\i\s\l\7\u\r\q\e\4\h\j\9\e\2\r\q\x\p\6\k\m\c\5\b\m\p\z\8\q\2\9\8\1\4\k\m\i\6\1\6\x\e\7\u\h\0\t\w\9\8\3\o\a\i\h\s\0\p\5\t\2\y\w\h\d\t\b\k\f\v\z\5\t\
2\n\q\3\7\o\y\d\z\w\a\z\j\3\f\4\e\1\u\i\z\j\c\y\m\b\8\d\2\7\i\9\6\f\6\f\m\g\7\3\b\6\t\r\5\g\1\l\5\2\x\9\4\r\n\l\d\5\5\5\b\5\f\u\v\h\w\f\m\t\d\m\w\4\q\m\l\v\0\t\v\3\o\2\3\s\7\d\k\h\l\g\q\6\b\k\j\s\6\x\x\0\d\4\4\z\2\r\m\k\u\o\c\y\6\m\3\6\m\n\9\a\h\b\w\a\7\j\t\t\c\i\0\q\9\0\v\l\j\t\8\8\y\v\k\y\4\3\f\v\r\8\e\b\0\y\5\a\q\q\0\g\n\n\r\8\j\4\k\w\n\s\o\k\q\a\p\z\v\7\5\e\l\m\2\a\f\i\8\4\6\r\f\c\a\0\b\7\g\q\w\t\d\v\0\l\u\4\a\0\q\d\x\v\8\9\r\4\j\3\y\h\5\f\1\r\4\m\w\6\t\3\v\n\l\8\d\f\d\3\f\l\7\6\w\o\8\u\b\j\o\k\z\9\i\b\4\q\9\g\7\e\h\9\o\q\t\t\e\s\p\3\0\b\3\r\w\9\2\7\b\a\x\6\r\c\g\5\t\o\j\2\7\d\r\n\l\5\t\w\j\s\0\k\e\j\z\h\u\l\b\s\z\0\e\g\u\u\l\0\9\u\k\w\2\1\0\y\f\e\o\m\e\y\2\j\9\6\j\4\v\l\k\b\a\c\r\f\o\4\n\h\h\4\g\d\i\c\2\e\8\k\n\a\k\h\5\e\4\u\j\7\r\i\n\9\z\v\q\4\f\3\t\s\g\b\h\4\b\i\6\4\e\l\v\0\n\x\e\1\g\1\z\m\m\j\r\a\u\6\j\m\h\a\j\i\p\o\s\g\0\f\g\2\l\z\f\5\y\0\4\e\4\7\c\f\6\4\d\k\h\f\o\s\u\l\a\s\5\c\7\0\0\n\p\p\k\0\j\6\k\x\6\s\6\r\5\x\k\u\x\h\4\k\2\r\h\h\2\j\q\5\8\a\o\s\9\3\e\e\5\a\n\l\9\i\p\0\o\z\6\g\x\y\4\1\e\g\o\x\5\0\n\8\v\t\k\v\k\r\w\v\t\z\b\v\r\h\0\o\f\0\d\a\q\q\c\t\u\v\r\9\p\7\x\i\6\8\m\u\f\6\2\x\h\y\b\b\c\2\8\x\5\w\w\f\p\6\a\w\e\a\f\m\k\u\q\t\u\5\0\f\o\0\g\8\i\h\k\2\p\w\s\2\z\s\5\i\g\m\x\f\5\o\e\8\t\2\j\t\r\a\b\h\e\b\z\5\1\p\9\4\2\j\p\h\j\o\i\v\k\8\m\8\e\g\4\2\p\v\f\t\q\i\i\n\t\x\x\r\n\0\3\g\y\1\y\k\7\q\o\q\y\9\i\w\h\f\9\d\j\l\l\a\v\g\n\8\0\0\a\9\f\g\4\u\o\1\0\i\3\1\c\l\5\1\z\6\w\h\q\9\u\r\f\e\q\s\h\6\q\w\6\7\6\p\t\i\j\2\u\c\k\6\9\z\t\q\k\l\m\h\f\t\p\l\a\h\6\6\p\w\6\4\c\o\j\l\z\z\o\m\7\9\j\9\q\a\z\7\x\d\x\f\5\f\2\5\7\n\h\w\b\s\i\k\2\0\o\h\j\u\8\0\2\2\4\8\m\q\4\p\h\0\z\2\6\5\v\g\m\4\a\l\9\2\g\t\o\w\h\x\8\d\6\5\o\a\5\0\c\a\y\u\t\n\p\k\u\2\u\g\o\3\b\l\0\p\g\8\j\p\0\7\n\s\k\2\d\z\2\s\c\2\6\i\3\c\b\w\8\r\r\n\7\s\6\6\e\p\s\n\n\i\k\l\b\2\m\h\2\t\s\x\i\0\q\u\p\u\z\9\a\d\h\e\x\t\d\7\2\p\a\i\t\e\j\e\k\i\q\1\i\h\h\d\w\w\w\k\4\9\z\5\7\3\o\c\a\0\y\i\8\y\5\e\d\w\t\f\p\d\7\o\n\k\m\w\q\y\d\a\w\d\0\f\e\m\w\6\1\d\a\w\f\0\p\c\p\z\0\d\y\n\j\p\7\n\j\g\6\i\5\r\h\r\3\t\8\j\a\q\w\r\g\p\6\q\d\t\m\l\r\4\x\0\a\s\7\t\6\x\8\i\k\l\5\7\h\l\u\k\b\9\6\e\0\c\5\2\t\l\2\2\0\b\v\6\o\u\l\h\3\d\j\j\2\w\b\a\2\b\q\s\6\i\y\p\0\r\9\m\i\p\8\2\r\e\o\s\7\4\c\1\8\9\w\w\2\m\m\7\2\z\x\9\k\r\d\u\s\q\b\f\t\y\m\5\3\y\7\b\b\m\8\u\f\u\m\n\t\i\q\z\d\f\i\l\p\4\d\a\6\b\d\q\8\5\x\m\y\e\j\a\j\v\c\m\z\v\t\7\n\i\9\p\u\3\w\z\f\i\c\0\e\4\t\w\8\a\s\x\c\2\b\q\5\9\h\y\g\h\3\6\m\9\3\l\1\o\g\i\n\h\4\6\5\f\m\t\u\i\a\y\q\m\w\7\j\5\9\i\b\5\2\s\9\0\4\d\9\g\o\1\2\r\x\5\c\b\w\a\n\x\v\e\y\7\c\n\q\5\e\7\3\e\r\1\2\k\2\k\6\d\z\u\f\x\g\c\9\x\f\3\q\d\5\9\w\3\d\p\c\8\n\u\y\4\n\s\i\o\4\x\v\m\l\9\1\e\h\q\r\a\1\d\n\f\a\h\o\s\i\5\r\u\9\n\u\1\w\b\m\v\o\c\3\u\j\6\4\n\5\a\2\l\y\y\f\c\z\m\6\y\4\h\w\c\v\k\1\e\j\3\d\4\w\u\6\k\r\u\r\h\k\t\p\w\b\8\3\v\h\h\c\2\j\1\q\x\l\o\h\a\8\z\m\5\b\k\2\p\4\9\g\4\7\k\m\l\5\9\o\e\6\q\l\i\5\o\3\h\i\t\x\2\8\5\f\q\m\w\u\o\l\y\y\m\4\0\d\n\u\s\j\4\w\3\k\v\6\o\7\z\p\d\q\7\x\k\r\s\l\d\s\m\0\7\t\u\r\k\7\f\b\s\6\u\2\j\w\d\7\m\n\r\3\o\b\5\j\5\x\k\e\d\q\3\n\2\y\a\h\g\5\n\7\7\l\3\3\2\5\v\h\a\v\x\u\s\7\u\o\n\j\i\k\a\3\f\k\b\8\8\r\z\z\q\s\o\k\4\s\0\d\9\h\9\d\0\b\x\t\a\w\3\n\v\b\5\l\f\1\0\4\h\7\0\s\c\y\g\4\g\i\6\w\w\b\4\8\q\n\s\w\0\b\i\r\3\v\n\n\f\n\k\z\s\y\g\k\2\r\p\l\v\l\8\a\r\1\3\d\l\3\k\r\m\1\3\v\7\i\m\x\n\5\m\5\e\o\b\q\u\8\5\e\c\m\j\o\8\r\e\y\8\o\c\2\h\7\w\o\h\8\r\3\u\o\u\g\4\u\6\c\y\r\d\3\m\e\n\c\6\5\2\r\2\l\9\d\j\4\n\l\l\b\d\8\q\0\i\8\f\7\8\c\x\5\2\j\j\3\1\0\6\w\n\j\m\9\h\5\d\a\e\b\1\x\m\o\a\3\v\d\m\0\1\z\h\p\6\y\l\f\1\v\m\5\1\5\q\e\g\x\2\v\j\0\x\s\d\t\d\u\1\2\v\h\0\3\g\v\3\l\2\b\6\i\3\v\h\v\0\c\v\f\b\c\r\l\2\n\k\b\n\v\h\x\3\7\n\f\e\z\v\9\0\9\6\x\u\c\k\a\o\g\3
\l\g\o\y\g\l\8\t\o\z\4\k\o\k\n\u\e\8\i\g\1\8\5\f\l\v\j\i\w\y\j\5\x\r\1\b\d\k\x\n\k\k\r\j\b\n\n\q\3\z\l\1\w\j\4\9\x\r\s\o\8\v\a\o\u\b\j\h\c\z\j\s\i\5\f\6\2\r\v\f\e\c\m\x\m\w\g\h\2\r\2\u\6\8\q\u\7\2\8\2\5\k\x\3\f\r\t\5\j\q\f\6\1\y\w\4\n\m\b\a\3\1\g\6\j\f\k\j\s\m\7\b\o\j\v\6\0\0\z\z\v\o\m\h\y\y\b\4\b\c\v\d\4\7\l\i\i\l\o\7\3\c\h\3\j\j\v\g\g\4\s\c\6\l\n\h\3\c\r\y\c\q\u\b\8\x\w\y\j\m\w\o\v\v\h\p\b\c\d\w\y\a\d\5\p\w\a\t\d\t\x\s\h\7\c\t\l\2\l\a\o\0\z\9\y\j\m\t\5\1\y\7\5\3\9\u\2\x\s\p\9\f\x\m\k\v\3\o\a\h\s\o\3\p\0\l\s\0\e\v\z\5\u\w\7\g\3\z\5\a\t\6\s\4\k\e\l\q\p\3\o\e\1\c\r\i\9\x\x\t\n\m\z\w\c\h\q\g\x\l\l\c\1\b\p\j\z\h\q\6\y\f\j\x\8\w\t\o\o\4\h\j\9\j\x\t\h\u\u\u\8\m\2\3\i\9\m\9\f\a\g\s\4\7\r\t\u\v\6\x\r\r\w\0\e\h\s\y\7\n\g\s\4\2\c\6\1\r\1\6\f\o\4\e\n\8\r\w\7\u\g\u\g\3\4\y\6\i\s\m\0\1\h\s\b\e\s\j\m\6\t\e\6\z\4\s\n\s\i\s\d\l\w\3\f\3\v\u\y\z\n\3\o\x\f\z\1\p\y\f\6\5\1\u\9\v\5\u\j\k\y\p\p\6\q\m\f\7\g\8\y\2\8\7\g\n\6\7\p\c\u\n\u\b\a\v\k\y\x\8\m\v\n\1\s\e\1\c\i\5\6\w\6\m\7\z\9\w\7\u\o\2\q\l\4\3\d\f\q\5\0\7\e\8\v\a\f\5\y\x\g\w\t\5\j\5\9\v\c\x\6\9\i\m\2\4\q\q\w\i\e\7\g\z\v\q\f\8\2\p\u ]] 00:07:26.455 00:07:26.455 real 0m0.986s 00:07:26.455 user 0m0.675s 00:07:26.455 sys 0m0.199s 00:07:26.455 05:47:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.455 05:47:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.455 05:47:47 -- dd/basic_rw.sh@1 -- # cleanup 00:07:26.455 05:47:47 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:26.455 05:47:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:26.455 05:47:47 -- dd/common.sh@11 -- # local nvme_ref= 00:07:26.455 05:47:47 -- dd/common.sh@12 -- # local size=0xffff 00:07:26.455 05:47:47 -- dd/common.sh@14 -- # local bs=1048576 00:07:26.455 05:47:47 -- dd/common.sh@15 -- # local count=1 00:07:26.455 05:47:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:26.455 05:47:47 -- dd/common.sh@18 -- # gen_conf 00:07:26.455 05:47:47 -- dd/common.sh@31 -- # xtrace_disable 00:07:26.455 05:47:47 -- common/autotest_common.sh@10 -- # set +x 00:07:26.455 [2024-12-15 05:47:47.970782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
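The long backslash-escaped run above is ordinary bash xtrace output for the verification step of dd_rw_offset: the script reads back 4096 bytes with read -rn4096 and compares them against the generated string inside [[ ... ]], and xtrace renders the quoted right-hand side of that comparison with every character escaped, which is why it looks so inflated here. The check itself is only two lines; in sketch form (the redirect from dd.dump1 is not visible in the trace and is assumed):

# verification step of basic_offset as traced above
read -rn4096 data_check < "$DUMP1"   # bytes that came back from the bdev (source assumed)
[[ $data_check == "$data" ]]         # $data is the 4096-byte string produced by gen_bytes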
00:07:26.455 [2024-12-15 05:47:47.971403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69749 ] 00:07:26.455 { 00:07:26.455 "subsystems": [ 00:07:26.455 { 00:07:26.455 "subsystem": "bdev", 00:07:26.455 "config": [ 00:07:26.455 { 00:07:26.455 "params": { 00:07:26.455 "trtype": "pcie", 00:07:26.455 "traddr": "0000:00:06.0", 00:07:26.455 "name": "Nvme0" 00:07:26.455 }, 00:07:26.455 "method": "bdev_nvme_attach_controller" 00:07:26.455 }, 00:07:26.455 { 00:07:26.455 "method": "bdev_wait_for_examine" 00:07:26.455 } 00:07:26.455 ] 00:07:26.455 } 00:07:26.455 ] 00:07:26.455 } 00:07:26.714 [2024-12-15 05:47:48.106708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.714 [2024-12-15 05:47:48.138944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.714  [2024-12-15T05:47:48.614Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:26.973 00:07:26.973 05:47:48 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.973 ************************************ 00:07:26.973 END TEST spdk_dd_basic_rw 00:07:26.973 ************************************ 00:07:26.973 00:07:26.973 real 0m13.872s 00:07:26.973 user 0m9.766s 00:07:26.973 sys 0m2.738s 00:07:26.973 05:47:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.973 05:47:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.973 05:47:48 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:26.973 05:47:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:26.973 05:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.973 05:47:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.973 ************************************ 00:07:26.973 START TEST spdk_dd_posix 00:07:26.973 ************************************ 00:07:26.973 05:47:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:26.973 * Looking for test storage... 
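Each START TEST / END TEST banner pair in this log, including the spdk_dd_basic_rw summary just above and the spdk_dd_posix run that begins here, comes from the autotest run_test wrapper, which prints the banner, times the named test, and emits the real/user/sys totals. Its body lives in autotest_common.sh and is only partially visible in the trace, so the version below is a rough illustrative equivalent rather than the actual helper:

run_test() {   # illustrative stand-in, not the autotest_common.sh original
    local name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # the real/user/sys lines in the log come from timing the test
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}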
00:07:26.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:26.973 05:47:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.973 05:47:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.973 05:47:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:27.233 05:47:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:27.233 05:47:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:27.233 05:47:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:27.233 05:47:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:27.233 05:47:48 -- scripts/common.sh@335 -- # IFS=.-: 00:07:27.233 05:47:48 -- scripts/common.sh@335 -- # read -ra ver1 00:07:27.233 05:47:48 -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.233 05:47:48 -- scripts/common.sh@336 -- # read -ra ver2 00:07:27.233 05:47:48 -- scripts/common.sh@337 -- # local 'op=<' 00:07:27.233 05:47:48 -- scripts/common.sh@339 -- # ver1_l=2 00:07:27.233 05:47:48 -- scripts/common.sh@340 -- # ver2_l=1 00:07:27.233 05:47:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:27.233 05:47:48 -- scripts/common.sh@343 -- # case "$op" in 00:07:27.233 05:47:48 -- scripts/common.sh@344 -- # : 1 00:07:27.233 05:47:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:27.233 05:47:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.233 05:47:48 -- scripts/common.sh@364 -- # decimal 1 00:07:27.233 05:47:48 -- scripts/common.sh@352 -- # local d=1 00:07:27.233 05:47:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.233 05:47:48 -- scripts/common.sh@354 -- # echo 1 00:07:27.233 05:47:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:27.233 05:47:48 -- scripts/common.sh@365 -- # decimal 2 00:07:27.233 05:47:48 -- scripts/common.sh@352 -- # local d=2 00:07:27.233 05:47:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.233 05:47:48 -- scripts/common.sh@354 -- # echo 2 00:07:27.233 05:47:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:27.233 05:47:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:27.233 05:47:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:27.233 05:47:48 -- scripts/common.sh@367 -- # return 0 00:07:27.233 05:47:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.233 05:47:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:27.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.233 --rc genhtml_branch_coverage=1 00:07:27.233 --rc genhtml_function_coverage=1 00:07:27.233 --rc genhtml_legend=1 00:07:27.233 --rc geninfo_all_blocks=1 00:07:27.233 --rc geninfo_unexecuted_blocks=1 00:07:27.233 00:07:27.233 ' 00:07:27.233 05:47:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:27.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.233 --rc genhtml_branch_coverage=1 00:07:27.233 --rc genhtml_function_coverage=1 00:07:27.233 --rc genhtml_legend=1 00:07:27.233 --rc geninfo_all_blocks=1 00:07:27.233 --rc geninfo_unexecuted_blocks=1 00:07:27.233 00:07:27.233 ' 00:07:27.233 05:47:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:27.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.233 --rc genhtml_branch_coverage=1 00:07:27.233 --rc genhtml_function_coverage=1 00:07:27.233 --rc genhtml_legend=1 00:07:27.233 --rc geninfo_all_blocks=1 00:07:27.233 --rc geninfo_unexecuted_blocks=1 00:07:27.233 00:07:27.233 ' 00:07:27.233 05:47:48 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:27.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.233 --rc genhtml_branch_coverage=1 00:07:27.233 --rc genhtml_function_coverage=1 00:07:27.233 --rc genhtml_legend=1 00:07:27.233 --rc geninfo_all_blocks=1 00:07:27.233 --rc geninfo_unexecuted_blocks=1 00:07:27.233 00:07:27.233 ' 00:07:27.233 05:47:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:27.233 05:47:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.233 05:47:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.233 05:47:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.233 05:47:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.233 05:47:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.233 05:47:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.233 05:47:48 -- paths/export.sh@5 -- # export PATH 00:07:27.233 05:47:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.233 05:47:48 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:27.233 05:47:48 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:27.233 05:47:48 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:27.233 05:47:48 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:27.233 05:47:48 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.233 05:47:48 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.233 05:47:48 -- dd/posix.sh@130 -- # tests 00:07:27.233 05:47:48 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:27.233 * First test run, liburing in use 00:07:27.233 05:47:48 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:27.233 05:47:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.233 05:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.233 05:47:48 -- common/autotest_common.sh@10 -- # set +x 00:07:27.233 ************************************ 00:07:27.233 START TEST dd_flag_append 00:07:27.233 ************************************ 00:07:27.233 05:47:48 -- common/autotest_common.sh@1114 -- # append 00:07:27.233 05:47:48 -- dd/posix.sh@16 -- # local dump0 00:07:27.233 05:47:48 -- dd/posix.sh@17 -- # local dump1 00:07:27.233 05:47:48 -- dd/posix.sh@19 -- # gen_bytes 32 00:07:27.233 05:47:48 -- dd/common.sh@98 -- # xtrace_disable 00:07:27.233 05:47:48 -- common/autotest_common.sh@10 -- # set +x 00:07:27.233 05:47:48 -- dd/posix.sh@19 -- # dump0=vymtgklshpiee1l265aume8my1e421r9 00:07:27.233 05:47:48 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:27.233 05:47:48 -- dd/common.sh@98 -- # xtrace_disable 00:07:27.233 05:47:48 -- common/autotest_common.sh@10 -- # set +x 00:07:27.233 05:47:48 -- dd/posix.sh@20 -- # dump1=7be1lh89ywzf9g9lobpsvljozmiq237f 00:07:27.233 05:47:48 -- dd/posix.sh@22 -- # printf %s vymtgklshpiee1l265aume8my1e421r9 00:07:27.233 05:47:48 -- dd/posix.sh@23 -- # printf %s 7be1lh89ywzf9g9lobpsvljozmiq237f 00:07:27.233 05:47:48 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:27.233 [2024-12-15 05:47:48.705683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
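dd_flag_append, which starts here, seeds the two dump files with distinct 32-byte strings (dump0 and dump1 in the trace) and then copies dump0 onto dump1 with --oflag=append; the comparison on the following lines passes only if dd.dump1 ends up holding its original bytes followed by dump0's. A sketch using the values from this run, with the shorthand variables from the earlier sketch; the $(< ...) re-read of the file is an assumption, since the trace does not show how the result is read back:

dump0=vymtgklshpiee1l265aume8my1e421r9     # 32 generated bytes, as in this run
dump1=7be1lh89ywzf9g9lobpsvljozmiq237f

printf %s "$dump0" > "$DUMP0"
printf %s "$dump1" > "$DUMP1"

# with --oflag=append the copy is added to the end of dd.dump1 instead of overwriting it
"$DD" --if="$DUMP0" --of="$DUMP1" --oflag=append

[[ $(< "$DUMP1") == "${dump1}${dump0}" ]]   # dd.dump1 must now be dump1 followed by dump0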
00:07:27.233 [2024-12-15 05:47:48.705768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69819 ] 00:07:27.233 [2024-12-15 05:47:48.835996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.233 [2024-12-15 05:47:48.867268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.493  [2024-12-15T05:47:49.134Z] Copying: 32/32 [B] (average 31 kBps) 00:07:27.493 00:07:27.493 05:47:49 -- dd/posix.sh@27 -- # [[ 7be1lh89ywzf9g9lobpsvljozmiq237fvymtgklshpiee1l265aume8my1e421r9 == \7\b\e\1\l\h\8\9\y\w\z\f\9\g\9\l\o\b\p\s\v\l\j\o\z\m\i\q\2\3\7\f\v\y\m\t\g\k\l\s\h\p\i\e\e\1\l\2\6\5\a\u\m\e\8\m\y\1\e\4\2\1\r\9 ]] 00:07:27.493 00:07:27.493 real 0m0.392s 00:07:27.493 user 0m0.183s 00:07:27.493 sys 0m0.091s 00:07:27.493 05:47:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.493 ************************************ 00:07:27.493 END TEST dd_flag_append 00:07:27.493 ************************************ 00:07:27.493 05:47:49 -- common/autotest_common.sh@10 -- # set +x 00:07:27.493 05:47:49 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:27.493 05:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.493 05:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.493 05:47:49 -- common/autotest_common.sh@10 -- # set +x 00:07:27.493 ************************************ 00:07:27.493 START TEST dd_flag_directory 00:07:27.493 ************************************ 00:07:27.493 05:47:49 -- common/autotest_common.sh@1114 -- # directory 00:07:27.493 05:47:49 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.493 05:47:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:27.493 05:47:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.493 05:47:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.493 05:47:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.493 05:47:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.493 05:47:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.493 05:47:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.493 05:47:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.493 05:47:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.493 05:47:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.493 05:47:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.752 [2024-12-15 05:47:49.157222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
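dd_flag_directory checks that spdk_dd honours the directory open flag: pointing --iflag=directory (and, a little later, --oflag=directory) at a regular file has to fail with "Not a directory", and the surrounding NOT wrapper turns that expected failure into a pass. A sketch of the expectation, with a plain if in place of the wrapper:

# dd.dump0 is a regular file, so asking for the directory flag must fail
if ! "$DD" --if="$DUMP0" --iflag=directory --of="$DUMP0"; then
    echo "got the expected 'Not a directory' failure"
fi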
00:07:27.752 [2024-12-15 05:47:49.157327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69840 ] 00:07:27.752 [2024-12-15 05:47:49.292196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.752 [2024-12-15 05:47:49.324688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.752 [2024-12-15 05:47:49.365883] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:27.752 [2024-12-15 05:47:49.365949] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:27.752 [2024-12-15 05:47:49.365977] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.011 [2024-12-15 05:47:49.422539] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:28.011 05:47:49 -- common/autotest_common.sh@653 -- # es=236 00:07:28.011 05:47:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.011 05:47:49 -- common/autotest_common.sh@662 -- # es=108 00:07:28.011 05:47:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:28.011 05:47:49 -- common/autotest_common.sh@670 -- # es=1 00:07:28.011 05:47:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.011 05:47:49 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:28.011 05:47:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:28.011 05:47:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:28.011 05:47:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.011 05:47:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.011 05:47:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.011 05:47:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.011 05:47:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.011 05:47:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.011 05:47:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.011 05:47:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.011 05:47:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:28.011 [2024-12-15 05:47:49.534345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
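The es=236, es=108 and es=1 assignments in the trace are the NOT wrapper normalizing the exit status of the command it just ran: spdk_dd comes back with status 236 here, the wrapper knocks 128 off statuses above 128, and the test ultimately passes because the final status is non-zero. An illustrative reduction of that logic (not the exact autotest_common.sh code, which also classifies the status further):

NOT() {   # illustrative reduction of the wrapper, not the real helper
    local es=0
    "$@" || es=$?                    # run the wrapped command and remember its exit status
    (( es > 128 )) && (( es -= 128 ))  # strip the high range, as seen in the trace (236 -> 108)
    (( es != 0 ))                    # NOT succeeds only when the wrapped command failed
}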
00:07:28.011 [2024-12-15 05:47:49.534447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69855 ] 00:07:28.270 [2024-12-15 05:47:49.669568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.270 [2024-12-15 05:47:49.700782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.270 [2024-12-15 05:47:49.742381] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:28.270 [2024-12-15 05:47:49.742445] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:28.270 [2024-12-15 05:47:49.742473] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.270 [2024-12-15 05:47:49.798703] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:28.270 05:47:49 -- common/autotest_common.sh@653 -- # es=236 00:07:28.270 05:47:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.270 05:47:49 -- common/autotest_common.sh@662 -- # es=108 00:07:28.270 05:47:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:28.270 05:47:49 -- common/autotest_common.sh@670 -- # es=1 00:07:28.270 05:47:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.270 00:07:28.270 real 0m0.766s 00:07:28.270 user 0m0.376s 00:07:28.270 sys 0m0.181s 00:07:28.270 05:47:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.270 05:47:49 -- common/autotest_common.sh@10 -- # set +x 00:07:28.270 ************************************ 00:07:28.270 END TEST dd_flag_directory 00:07:28.270 ************************************ 00:07:28.529 05:47:49 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:28.529 05:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:28.529 05:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.529 05:47:49 -- common/autotest_common.sh@10 -- # set +x 00:07:28.529 ************************************ 00:07:28.529 START TEST dd_flag_nofollow 00:07:28.529 ************************************ 00:07:28.529 05:47:49 -- common/autotest_common.sh@1114 -- # nofollow 00:07:28.529 05:47:49 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:28.529 05:47:49 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:28.529 05:47:49 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:28.529 05:47:49 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:28.529 05:47:49 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.529 05:47:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:28.529 05:47:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.529 05:47:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.529 05:47:49 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.529 05:47:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.529 05:47:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.529 05:47:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.529 05:47:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.529 05:47:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.529 05:47:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.529 05:47:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.530 [2024-12-15 05:47:49.982495] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.530 [2024-12-15 05:47:49.982594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69878 ] 00:07:28.530 [2024-12-15 05:47:50.120796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.530 [2024-12-15 05:47:50.155144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.789 [2024-12-15 05:47:50.197667] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:28.789 [2024-12-15 05:47:50.197730] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:28.789 [2024-12-15 05:47:50.197741] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.789 [2024-12-15 05:47:50.254709] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:28.789 05:47:50 -- common/autotest_common.sh@653 -- # es=216 00:07:28.789 05:47:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.789 05:47:50 -- common/autotest_common.sh@662 -- # es=88 00:07:28.789 05:47:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:28.789 05:47:50 -- common/autotest_common.sh@670 -- # es=1 00:07:28.789 05:47:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.789 05:47:50 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:28.789 05:47:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:28.789 05:47:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:28.789 05:47:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.789 05:47:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.789 05:47:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.789 05:47:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.789 05:47:50 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.789 05:47:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.789 05:47:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.789 05:47:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.789 05:47:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:28.789 [2024-12-15 05:47:50.370229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.789 [2024-12-15 05:47:50.370336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69893 ] 00:07:29.047 [2024-12-15 05:47:50.506689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.048 [2024-12-15 05:47:50.537242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.048 [2024-12-15 05:47:50.578221] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:29.048 [2024-12-15 05:47:50.578288] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:29.048 [2024-12-15 05:47:50.578317] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.048 [2024-12-15 05:47:50.633968] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:29.306 05:47:50 -- common/autotest_common.sh@653 -- # es=216 00:07:29.306 05:47:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.306 05:47:50 -- common/autotest_common.sh@662 -- # es=88 00:07:29.306 05:47:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.306 05:47:50 -- common/autotest_common.sh@670 -- # es=1 00:07:29.306 05:47:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.306 05:47:50 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:29.306 05:47:50 -- dd/common.sh@98 -- # xtrace_disable 00:07:29.306 05:47:50 -- common/autotest_common.sh@10 -- # set +x 00:07:29.306 05:47:50 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.306 [2024-12-15 05:47:50.752214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:29.306 [2024-12-15 05:47:50.752320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69895 ] 00:07:29.306 [2024-12-15 05:47:50.887506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.306 [2024-12-15 05:47:50.918038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.566  [2024-12-15T05:47:51.207Z] Copying: 512/512 [B] (average 500 kBps) 00:07:29.566 00:07:29.566 05:47:51 -- dd/posix.sh@49 -- # [[ 2fb1vbcey77a5a1935u1iq7nvetq50yx81u1xjd340j8lqzhshscpnlv0ng188z60ecjypymggpb33nyz5qsu2ncn9pmyhp80ug7z1s8224jufcq48tu65jez90w4nihmuan2t1ckactfy6fog7gh84fd7dn24j63grqwb59rwztrvv2xhh3uhrlq10x2gwh4h0pewzpgilzdodpqk559lvecplxyul7nqrxx83oezqze76v4deorw58cgtqdy10n9ric3us0zrxl9tox9ufiwttzkcrfygcrjcpfugukrrmrssejan8scwfbeah22munqm1x6m2g4p8nr258a037ol9g985hrz1sliwu7pfktzapf6javgnfrzg1hijml50b74zbviz0bzo9uqf33ejilkr2iwg046k63be845855tljdzvpxtpiqqqnnuv80qcybt7g9an2gntwep22ppfa1u1t25td9e8kk7wwfpq1bxd6rm5g5n3nzpycfy4avno == \2\f\b\1\v\b\c\e\y\7\7\a\5\a\1\9\3\5\u\1\i\q\7\n\v\e\t\q\5\0\y\x\8\1\u\1\x\j\d\3\4\0\j\8\l\q\z\h\s\h\s\c\p\n\l\v\0\n\g\1\8\8\z\6\0\e\c\j\y\p\y\m\g\g\p\b\3\3\n\y\z\5\q\s\u\2\n\c\n\9\p\m\y\h\p\8\0\u\g\7\z\1\s\8\2\2\4\j\u\f\c\q\4\8\t\u\6\5\j\e\z\9\0\w\4\n\i\h\m\u\a\n\2\t\1\c\k\a\c\t\f\y\6\f\o\g\7\g\h\8\4\f\d\7\d\n\2\4\j\6\3\g\r\q\w\b\5\9\r\w\z\t\r\v\v\2\x\h\h\3\u\h\r\l\q\1\0\x\2\g\w\h\4\h\0\p\e\w\z\p\g\i\l\z\d\o\d\p\q\k\5\5\9\l\v\e\c\p\l\x\y\u\l\7\n\q\r\x\x\8\3\o\e\z\q\z\e\7\6\v\4\d\e\o\r\w\5\8\c\g\t\q\d\y\1\0\n\9\r\i\c\3\u\s\0\z\r\x\l\9\t\o\x\9\u\f\i\w\t\t\z\k\c\r\f\y\g\c\r\j\c\p\f\u\g\u\k\r\r\m\r\s\s\e\j\a\n\8\s\c\w\f\b\e\a\h\2\2\m\u\n\q\m\1\x\6\m\2\g\4\p\8\n\r\2\5\8\a\0\3\7\o\l\9\g\9\8\5\h\r\z\1\s\l\i\w\u\7\p\f\k\t\z\a\p\f\6\j\a\v\g\n\f\r\z\g\1\h\i\j\m\l\5\0\b\7\4\z\b\v\i\z\0\b\z\o\9\u\q\f\3\3\e\j\i\l\k\r\2\i\w\g\0\4\6\k\6\3\b\e\8\4\5\8\5\5\t\l\j\d\z\v\p\x\t\p\i\q\q\q\n\n\u\v\8\0\q\c\y\b\t\7\g\9\a\n\2\g\n\t\w\e\p\2\2\p\p\f\a\1\u\1\t\2\5\t\d\9\e\8\k\k\7\w\w\f\p\q\1\b\x\d\6\r\m\5\g\5\n\3\n\z\p\y\c\f\y\4\a\v\n\o ]] 00:07:29.566 00:07:29.566 real 0m1.182s 00:07:29.566 user 0m0.570s 00:07:29.566 sys 0m0.283s 00:07:29.566 05:47:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.566 05:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:29.566 ************************************ 00:07:29.566 END TEST dd_flag_nofollow 00:07:29.566 ************************************ 00:07:29.566 05:47:51 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:29.566 05:47:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:29.566 05:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.566 05:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:29.566 ************************************ 00:07:29.566 START TEST dd_flag_noatime 00:07:29.566 ************************************ 00:07:29.566 05:47:51 -- common/autotest_common.sh@1114 -- # noatime 00:07:29.566 05:47:51 -- dd/posix.sh@53 -- # local atime_if 00:07:29.566 05:47:51 -- dd/posix.sh@54 -- # local atime_of 00:07:29.566 05:47:51 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:29.566 05:47:51 -- dd/common.sh@98 -- # xtrace_disable 00:07:29.566 05:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:29.566 05:47:51 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:29.566 05:47:51 -- dd/posix.sh@60 -- # atime_if=1734241670 
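At this point the dd_flag_nofollow checks have completed: dd.dump0 and dd.dump1 each get a symlink, spdk_dd must refuse to follow the link when nofollow is set on either side (the "Too many levels of symbolic links" errors above), and a plain copy through the link must still succeed with matching contents. A condensed sketch of that sequence with the same paths as in the log (the suite's NOT wrapper and content comparison are simplified here):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  T=/home/vagrant/spdk_repo/spdk/test/dd
  ln -fs "$T/dd.dump0" "$T/dd.dump0.link"
  ln -fs "$T/dd.dump1" "$T/dd.dump1.link"
  ! "$DD" --if="$T/dd.dump0.link" --iflag=nofollow --of="$T/dd.dump1"   # must fail: symlink on the input side
  ! "$DD" --if="$T/dd.dump0" --of="$T/dd.dump1.link" --oflag=nofollow   # must fail: symlink on the output side
  "$DD" --if="$T/dd.dump0.link" --of="$T/dd.dump1"                      # no nofollow: copy through the link succeeds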
00:07:29.566 05:47:51 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.566 05:47:51 -- dd/posix.sh@61 -- # atime_of=1734241671 00:07:29.566 05:47:51 -- dd/posix.sh@66 -- # sleep 1 00:07:30.943 05:47:52 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.943 [2024-12-15 05:47:52.229217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:30.943 [2024-12-15 05:47:52.229341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69930 ] 00:07:30.943 [2024-12-15 05:47:52.366748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.943 [2024-12-15 05:47:52.407412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.943  [2024-12-15T05:47:52.843Z] Copying: 512/512 [B] (average 500 kBps) 00:07:31.202 00:07:31.202 05:47:52 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:31.202 05:47:52 -- dd/posix.sh@69 -- # (( atime_if == 1734241670 )) 00:07:31.202 05:47:52 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.202 05:47:52 -- dd/posix.sh@70 -- # (( atime_of == 1734241671 )) 00:07:31.202 05:47:52 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.202 [2024-12-15 05:47:52.674767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:31.202 [2024-12-15 05:47:52.675376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69947 ] 00:07:31.202 [2024-12-15 05:47:52.811512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.461 [2024-12-15 05:47:52.842866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.461  [2024-12-15T05:47:53.102Z] Copying: 512/512 [B] (average 500 kBps) 00:07:31.461 00:07:31.461 05:47:53 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:31.461 05:47:53 -- dd/posix.sh@73 -- # (( atime_if < 1734241672 )) 00:07:31.461 00:07:31.461 real 0m1.875s 00:07:31.461 user 0m0.429s 00:07:31.461 sys 0m0.209s 00:07:31.461 05:47:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.461 ************************************ 00:07:31.461 END TEST dd_flag_noatime 00:07:31.461 ************************************ 00:07:31.461 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:07:31.461 05:47:53 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:31.461 05:47:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:31.461 05:47:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.461 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:07:31.461 ************************************ 00:07:31.461 START TEST dd_flags_misc 00:07:31.461 ************************************ 00:07:31.461 05:47:53 -- common/autotest_common.sh@1114 -- # io 00:07:31.461 05:47:53 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:31.461 05:47:53 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:31.461 05:47:53 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:31.461 05:47:53 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:31.461 05:47:53 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:31.461 05:47:53 -- dd/common.sh@98 -- # xtrace_disable 00:07:31.461 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:07:31.461 05:47:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.461 05:47:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:31.720 [2024-12-15 05:47:53.145584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
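The dd_flag_noatime run that finishes here relies on nothing more than stat's %X field: record the access time, copy with --iflag=noatime and expect it unchanged, then copy again without the flag and expect it to move forward. A sketch of that logic with the log's paths (timestamps such as 1734241670 above are just the concrete values this run happened to see):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  T=/home/vagrant/spdk_repo/spdk/test/dd
  atime_if=$(stat --printf=%X "$T/dd.dump0")
  sleep 1
  "$DD" --if="$T/dd.dump0" --iflag=noatime --of="$T/dd.dump1"
  (( $(stat --printf=%X "$T/dd.dump0") == atime_if ))    # noatime read: access time untouched
  "$DD" --if="$T/dd.dump0" --of="$T/dd.dump1"
  (( atime_if < $(stat --printf=%X "$T/dd.dump0") ))     # normal read: access time advanced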
00:07:31.720 [2024-12-15 05:47:53.145682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69968 ] 00:07:31.720 [2024-12-15 05:47:53.280277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.720 [2024-12-15 05:47:53.314748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.720  [2024-12-15T05:47:53.621Z] Copying: 512/512 [B] (average 500 kBps) 00:07:31.980 00:07:31.980 05:47:53 -- dd/posix.sh@93 -- # [[ 8bq2sa7w33xreormpujknl44qt8azgexpmj3eyg5nlg41egvarmjiec7x9yw3z8sf6jxhoyoho5lhbohnyj0ey6qdhqv9se1k5bbj9ezl417i8pwcll3j0wga7creppl47ml29brxchv00h50k8df2ja07l4hut1drpjcy95yko89240g9uzfugb8wh84me1krts16a1scgbcjnv3ijc44yx47d1coy8bkvq4x0mliudwdg4l085mi781khihc3q9mjknheah5hcok62vxin5duz7e8tkjrn2mzd2g50ld9zlkxc89ykd0io9e9um1vtbuaj33uvfuqiigd1l15t5nqjfad00c0wcip2dq1lmco07i33khm51ta2j69sz4zu4i1wyt713lrpxejp0ij3ywiu9gu3oqe1r23w8eeujsow1v9g66jqsos38o3ay4o48p3my1ubi1g48kvaaeqkh9tn961i95ac704a20k2vkabvi37gpcf7mybruqtm0gm == \8\b\q\2\s\a\7\w\3\3\x\r\e\o\r\m\p\u\j\k\n\l\4\4\q\t\8\a\z\g\e\x\p\m\j\3\e\y\g\5\n\l\g\4\1\e\g\v\a\r\m\j\i\e\c\7\x\9\y\w\3\z\8\s\f\6\j\x\h\o\y\o\h\o\5\l\h\b\o\h\n\y\j\0\e\y\6\q\d\h\q\v\9\s\e\1\k\5\b\b\j\9\e\z\l\4\1\7\i\8\p\w\c\l\l\3\j\0\w\g\a\7\c\r\e\p\p\l\4\7\m\l\2\9\b\r\x\c\h\v\0\0\h\5\0\k\8\d\f\2\j\a\0\7\l\4\h\u\t\1\d\r\p\j\c\y\9\5\y\k\o\8\9\2\4\0\g\9\u\z\f\u\g\b\8\w\h\8\4\m\e\1\k\r\t\s\1\6\a\1\s\c\g\b\c\j\n\v\3\i\j\c\4\4\y\x\4\7\d\1\c\o\y\8\b\k\v\q\4\x\0\m\l\i\u\d\w\d\g\4\l\0\8\5\m\i\7\8\1\k\h\i\h\c\3\q\9\m\j\k\n\h\e\a\h\5\h\c\o\k\6\2\v\x\i\n\5\d\u\z\7\e\8\t\k\j\r\n\2\m\z\d\2\g\5\0\l\d\9\z\l\k\x\c\8\9\y\k\d\0\i\o\9\e\9\u\m\1\v\t\b\u\a\j\3\3\u\v\f\u\q\i\i\g\d\1\l\1\5\t\5\n\q\j\f\a\d\0\0\c\0\w\c\i\p\2\d\q\1\l\m\c\o\0\7\i\3\3\k\h\m\5\1\t\a\2\j\6\9\s\z\4\z\u\4\i\1\w\y\t\7\1\3\l\r\p\x\e\j\p\0\i\j\3\y\w\i\u\9\g\u\3\o\q\e\1\r\2\3\w\8\e\e\u\j\s\o\w\1\v\9\g\6\6\j\q\s\o\s\3\8\o\3\a\y\4\o\4\8\p\3\m\y\1\u\b\i\1\g\4\8\k\v\a\a\e\q\k\h\9\t\n\9\6\1\i\9\5\a\c\7\0\4\a\2\0\k\2\v\k\a\b\v\i\3\7\g\p\c\f\7\m\y\b\r\u\q\t\m\0\g\m ]] 00:07:31.980 05:47:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.980 05:47:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:31.980 [2024-12-15 05:47:53.549780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:31.980 [2024-12-15 05:47:53.549910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69981 ] 00:07:32.239 [2024-12-15 05:47:53.685839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.239 [2024-12-15 05:47:53.716426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.239  [2024-12-15T05:47:54.139Z] Copying: 512/512 [B] (average 500 kBps) 00:07:32.498 00:07:32.498 05:47:53 -- dd/posix.sh@93 -- # [[ 8bq2sa7w33xreormpujknl44qt8azgexpmj3eyg5nlg41egvarmjiec7x9yw3z8sf6jxhoyoho5lhbohnyj0ey6qdhqv9se1k5bbj9ezl417i8pwcll3j0wga7creppl47ml29brxchv00h50k8df2ja07l4hut1drpjcy95yko89240g9uzfugb8wh84me1krts16a1scgbcjnv3ijc44yx47d1coy8bkvq4x0mliudwdg4l085mi781khihc3q9mjknheah5hcok62vxin5duz7e8tkjrn2mzd2g50ld9zlkxc89ykd0io9e9um1vtbuaj33uvfuqiigd1l15t5nqjfad00c0wcip2dq1lmco07i33khm51ta2j69sz4zu4i1wyt713lrpxejp0ij3ywiu9gu3oqe1r23w8eeujsow1v9g66jqsos38o3ay4o48p3my1ubi1g48kvaaeqkh9tn961i95ac704a20k2vkabvi37gpcf7mybruqtm0gm == \8\b\q\2\s\a\7\w\3\3\x\r\e\o\r\m\p\u\j\k\n\l\4\4\q\t\8\a\z\g\e\x\p\m\j\3\e\y\g\5\n\l\g\4\1\e\g\v\a\r\m\j\i\e\c\7\x\9\y\w\3\z\8\s\f\6\j\x\h\o\y\o\h\o\5\l\h\b\o\h\n\y\j\0\e\y\6\q\d\h\q\v\9\s\e\1\k\5\b\b\j\9\e\z\l\4\1\7\i\8\p\w\c\l\l\3\j\0\w\g\a\7\c\r\e\p\p\l\4\7\m\l\2\9\b\r\x\c\h\v\0\0\h\5\0\k\8\d\f\2\j\a\0\7\l\4\h\u\t\1\d\r\p\j\c\y\9\5\y\k\o\8\9\2\4\0\g\9\u\z\f\u\g\b\8\w\h\8\4\m\e\1\k\r\t\s\1\6\a\1\s\c\g\b\c\j\n\v\3\i\j\c\4\4\y\x\4\7\d\1\c\o\y\8\b\k\v\q\4\x\0\m\l\i\u\d\w\d\g\4\l\0\8\5\m\i\7\8\1\k\h\i\h\c\3\q\9\m\j\k\n\h\e\a\h\5\h\c\o\k\6\2\v\x\i\n\5\d\u\z\7\e\8\t\k\j\r\n\2\m\z\d\2\g\5\0\l\d\9\z\l\k\x\c\8\9\y\k\d\0\i\o\9\e\9\u\m\1\v\t\b\u\a\j\3\3\u\v\f\u\q\i\i\g\d\1\l\1\5\t\5\n\q\j\f\a\d\0\0\c\0\w\c\i\p\2\d\q\1\l\m\c\o\0\7\i\3\3\k\h\m\5\1\t\a\2\j\6\9\s\z\4\z\u\4\i\1\w\y\t\7\1\3\l\r\p\x\e\j\p\0\i\j\3\y\w\i\u\9\g\u\3\o\q\e\1\r\2\3\w\8\e\e\u\j\s\o\w\1\v\9\g\6\6\j\q\s\o\s\3\8\o\3\a\y\4\o\4\8\p\3\m\y\1\u\b\i\1\g\4\8\k\v\a\a\e\q\k\h\9\t\n\9\6\1\i\9\5\a\c\7\0\4\a\2\0\k\2\v\k\a\b\v\i\3\7\g\p\c\f\7\m\y\b\r\u\q\t\m\0\g\m ]] 00:07:32.498 05:47:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:32.498 05:47:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:32.498 [2024-12-15 05:47:53.945807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:32.498 [2024-12-15 05:47:53.945929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69983 ] 00:07:32.498 [2024-12-15 05:47:54.080518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.498 [2024-12-15 05:47:54.111658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.757  [2024-12-15T05:47:54.398Z] Copying: 512/512 [B] (average 100 kBps) 00:07:32.757 00:07:32.757 05:47:54 -- dd/posix.sh@93 -- # [[ 8bq2sa7w33xreormpujknl44qt8azgexpmj3eyg5nlg41egvarmjiec7x9yw3z8sf6jxhoyoho5lhbohnyj0ey6qdhqv9se1k5bbj9ezl417i8pwcll3j0wga7creppl47ml29brxchv00h50k8df2ja07l4hut1drpjcy95yko89240g9uzfugb8wh84me1krts16a1scgbcjnv3ijc44yx47d1coy8bkvq4x0mliudwdg4l085mi781khihc3q9mjknheah5hcok62vxin5duz7e8tkjrn2mzd2g50ld9zlkxc89ykd0io9e9um1vtbuaj33uvfuqiigd1l15t5nqjfad00c0wcip2dq1lmco07i33khm51ta2j69sz4zu4i1wyt713lrpxejp0ij3ywiu9gu3oqe1r23w8eeujsow1v9g66jqsos38o3ay4o48p3my1ubi1g48kvaaeqkh9tn961i95ac704a20k2vkabvi37gpcf7mybruqtm0gm == \8\b\q\2\s\a\7\w\3\3\x\r\e\o\r\m\p\u\j\k\n\l\4\4\q\t\8\a\z\g\e\x\p\m\j\3\e\y\g\5\n\l\g\4\1\e\g\v\a\r\m\j\i\e\c\7\x\9\y\w\3\z\8\s\f\6\j\x\h\o\y\o\h\o\5\l\h\b\o\h\n\y\j\0\e\y\6\q\d\h\q\v\9\s\e\1\k\5\b\b\j\9\e\z\l\4\1\7\i\8\p\w\c\l\l\3\j\0\w\g\a\7\c\r\e\p\p\l\4\7\m\l\2\9\b\r\x\c\h\v\0\0\h\5\0\k\8\d\f\2\j\a\0\7\l\4\h\u\t\1\d\r\p\j\c\y\9\5\y\k\o\8\9\2\4\0\g\9\u\z\f\u\g\b\8\w\h\8\4\m\e\1\k\r\t\s\1\6\a\1\s\c\g\b\c\j\n\v\3\i\j\c\4\4\y\x\4\7\d\1\c\o\y\8\b\k\v\q\4\x\0\m\l\i\u\d\w\d\g\4\l\0\8\5\m\i\7\8\1\k\h\i\h\c\3\q\9\m\j\k\n\h\e\a\h\5\h\c\o\k\6\2\v\x\i\n\5\d\u\z\7\e\8\t\k\j\r\n\2\m\z\d\2\g\5\0\l\d\9\z\l\k\x\c\8\9\y\k\d\0\i\o\9\e\9\u\m\1\v\t\b\u\a\j\3\3\u\v\f\u\q\i\i\g\d\1\l\1\5\t\5\n\q\j\f\a\d\0\0\c\0\w\c\i\p\2\d\q\1\l\m\c\o\0\7\i\3\3\k\h\m\5\1\t\a\2\j\6\9\s\z\4\z\u\4\i\1\w\y\t\7\1\3\l\r\p\x\e\j\p\0\i\j\3\y\w\i\u\9\g\u\3\o\q\e\1\r\2\3\w\8\e\e\u\j\s\o\w\1\v\9\g\6\6\j\q\s\o\s\3\8\o\3\a\y\4\o\4\8\p\3\m\y\1\u\b\i\1\g\4\8\k\v\a\a\e\q\k\h\9\t\n\9\6\1\i\9\5\a\c\7\0\4\a\2\0\k\2\v\k\a\b\v\i\3\7\g\p\c\f\7\m\y\b\r\u\q\t\m\0\g\m ]] 00:07:32.757 05:47:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:32.757 05:47:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:32.757 [2024-12-15 05:47:54.361242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:32.757 [2024-12-15 05:47:54.361349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69985 ] 00:07:33.016 [2024-12-15 05:47:54.493898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.016 [2024-12-15 05:47:54.527503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.016  [2024-12-15T05:47:54.916Z] Copying: 512/512 [B] (average 250 kBps) 00:07:33.275 00:07:33.275 05:47:54 -- dd/posix.sh@93 -- # [[ 8bq2sa7w33xreormpujknl44qt8azgexpmj3eyg5nlg41egvarmjiec7x9yw3z8sf6jxhoyoho5lhbohnyj0ey6qdhqv9se1k5bbj9ezl417i8pwcll3j0wga7creppl47ml29brxchv00h50k8df2ja07l4hut1drpjcy95yko89240g9uzfugb8wh84me1krts16a1scgbcjnv3ijc44yx47d1coy8bkvq4x0mliudwdg4l085mi781khihc3q9mjknheah5hcok62vxin5duz7e8tkjrn2mzd2g50ld9zlkxc89ykd0io9e9um1vtbuaj33uvfuqiigd1l15t5nqjfad00c0wcip2dq1lmco07i33khm51ta2j69sz4zu4i1wyt713lrpxejp0ij3ywiu9gu3oqe1r23w8eeujsow1v9g66jqsos38o3ay4o48p3my1ubi1g48kvaaeqkh9tn961i95ac704a20k2vkabvi37gpcf7mybruqtm0gm == \8\b\q\2\s\a\7\w\3\3\x\r\e\o\r\m\p\u\j\k\n\l\4\4\q\t\8\a\z\g\e\x\p\m\j\3\e\y\g\5\n\l\g\4\1\e\g\v\a\r\m\j\i\e\c\7\x\9\y\w\3\z\8\s\f\6\j\x\h\o\y\o\h\o\5\l\h\b\o\h\n\y\j\0\e\y\6\q\d\h\q\v\9\s\e\1\k\5\b\b\j\9\e\z\l\4\1\7\i\8\p\w\c\l\l\3\j\0\w\g\a\7\c\r\e\p\p\l\4\7\m\l\2\9\b\r\x\c\h\v\0\0\h\5\0\k\8\d\f\2\j\a\0\7\l\4\h\u\t\1\d\r\p\j\c\y\9\5\y\k\o\8\9\2\4\0\g\9\u\z\f\u\g\b\8\w\h\8\4\m\e\1\k\r\t\s\1\6\a\1\s\c\g\b\c\j\n\v\3\i\j\c\4\4\y\x\4\7\d\1\c\o\y\8\b\k\v\q\4\x\0\m\l\i\u\d\w\d\g\4\l\0\8\5\m\i\7\8\1\k\h\i\h\c\3\q\9\m\j\k\n\h\e\a\h\5\h\c\o\k\6\2\v\x\i\n\5\d\u\z\7\e\8\t\k\j\r\n\2\m\z\d\2\g\5\0\l\d\9\z\l\k\x\c\8\9\y\k\d\0\i\o\9\e\9\u\m\1\v\t\b\u\a\j\3\3\u\v\f\u\q\i\i\g\d\1\l\1\5\t\5\n\q\j\f\a\d\0\0\c\0\w\c\i\p\2\d\q\1\l\m\c\o\0\7\i\3\3\k\h\m\5\1\t\a\2\j\6\9\s\z\4\z\u\4\i\1\w\y\t\7\1\3\l\r\p\x\e\j\p\0\i\j\3\y\w\i\u\9\g\u\3\o\q\e\1\r\2\3\w\8\e\e\u\j\s\o\w\1\v\9\g\6\6\j\q\s\o\s\3\8\o\3\a\y\4\o\4\8\p\3\m\y\1\u\b\i\1\g\4\8\k\v\a\a\e\q\k\h\9\t\n\9\6\1\i\9\5\a\c\7\0\4\a\2\0\k\2\v\k\a\b\v\i\3\7\g\p\c\f\7\m\y\b\r\u\q\t\m\0\g\m ]] 00:07:33.275 05:47:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:33.275 05:47:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:33.275 05:47:54 -- dd/common.sh@98 -- # xtrace_disable 00:07:33.275 05:47:54 -- common/autotest_common.sh@10 -- # set +x 00:07:33.275 05:47:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:33.275 05:47:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:33.275 [2024-12-15 05:47:54.757649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:33.275 [2024-12-15 05:47:54.757745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69998 ] 00:07:33.275 [2024-12-15 05:47:54.891104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.534 [2024-12-15 05:47:54.922254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.534  [2024-12-15T05:47:55.175Z] Copying: 512/512 [B] (average 500 kBps) 00:07:33.534 00:07:33.534 05:47:55 -- dd/posix.sh@93 -- # [[ x8cej6x2htbdtsnqxj85gps7lnw5c7y2ct0k62mboha62mx5v4dnunjlwzbnfzpq1tdqxjxeyyl8ky1068d27f8rgs4ctyqjwrqtxpm4jdrl4czw7c90i0rz8rjhkw9qbxdnq1d3aj16yqb6z82nhuqt8thd5goq5psng6zirvkhcitrxuz00fkzyps4m6tfh2lit1mhmpwcafs1rhb4tx01tumtgvwmmyphqjah5ggqmhvwtcrwqhhq9a4g72bue9w4g2o654fazbueob50itaiwkv8vihl4u52fio8io7raep5faptqnmt6k6zmve68p5qjpobdfhhhnxdfjth9eotp8kjl4dshz8cp9s084fauv9wogqn50xvi4fgprlpy28lfrc3r7rqdw9mfb8k5mt9nll4g3wuxgjbgxsgg5ri17nq1ryuyzc8vt4f9irgr7j98alui8htnj9petuedu7daipi7bksiomwm60w7p2khj9g948pz0wf8mivf8xj == \x\8\c\e\j\6\x\2\h\t\b\d\t\s\n\q\x\j\8\5\g\p\s\7\l\n\w\5\c\7\y\2\c\t\0\k\6\2\m\b\o\h\a\6\2\m\x\5\v\4\d\n\u\n\j\l\w\z\b\n\f\z\p\q\1\t\d\q\x\j\x\e\y\y\l\8\k\y\1\0\6\8\d\2\7\f\8\r\g\s\4\c\t\y\q\j\w\r\q\t\x\p\m\4\j\d\r\l\4\c\z\w\7\c\9\0\i\0\r\z\8\r\j\h\k\w\9\q\b\x\d\n\q\1\d\3\a\j\1\6\y\q\b\6\z\8\2\n\h\u\q\t\8\t\h\d\5\g\o\q\5\p\s\n\g\6\z\i\r\v\k\h\c\i\t\r\x\u\z\0\0\f\k\z\y\p\s\4\m\6\t\f\h\2\l\i\t\1\m\h\m\p\w\c\a\f\s\1\r\h\b\4\t\x\0\1\t\u\m\t\g\v\w\m\m\y\p\h\q\j\a\h\5\g\g\q\m\h\v\w\t\c\r\w\q\h\h\q\9\a\4\g\7\2\b\u\e\9\w\4\g\2\o\6\5\4\f\a\z\b\u\e\o\b\5\0\i\t\a\i\w\k\v\8\v\i\h\l\4\u\5\2\f\i\o\8\i\o\7\r\a\e\p\5\f\a\p\t\q\n\m\t\6\k\6\z\m\v\e\6\8\p\5\q\j\p\o\b\d\f\h\h\h\n\x\d\f\j\t\h\9\e\o\t\p\8\k\j\l\4\d\s\h\z\8\c\p\9\s\0\8\4\f\a\u\v\9\w\o\g\q\n\5\0\x\v\i\4\f\g\p\r\l\p\y\2\8\l\f\r\c\3\r\7\r\q\d\w\9\m\f\b\8\k\5\m\t\9\n\l\l\4\g\3\w\u\x\g\j\b\g\x\s\g\g\5\r\i\1\7\n\q\1\r\y\u\y\z\c\8\v\t\4\f\9\i\r\g\r\7\j\9\8\a\l\u\i\8\h\t\n\j\9\p\e\t\u\e\d\u\7\d\a\i\p\i\7\b\k\s\i\o\m\w\m\6\0\w\7\p\2\k\h\j\9\g\9\4\8\p\z\0\w\f\8\m\i\v\f\8\x\j ]] 00:07:33.534 05:47:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:33.534 05:47:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:33.534 [2024-12-15 05:47:55.152866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:33.534 [2024-12-15 05:47:55.152983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70000 ] 00:07:33.793 [2024-12-15 05:47:55.287884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.793 [2024-12-15 05:47:55.318594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.793  [2024-12-15T05:47:55.693Z] Copying: 512/512 [B] (average 500 kBps) 00:07:34.052 00:07:34.052 05:47:55 -- dd/posix.sh@93 -- # [[ x8cej6x2htbdtsnqxj85gps7lnw5c7y2ct0k62mboha62mx5v4dnunjlwzbnfzpq1tdqxjxeyyl8ky1068d27f8rgs4ctyqjwrqtxpm4jdrl4czw7c90i0rz8rjhkw9qbxdnq1d3aj16yqb6z82nhuqt8thd5goq5psng6zirvkhcitrxuz00fkzyps4m6tfh2lit1mhmpwcafs1rhb4tx01tumtgvwmmyphqjah5ggqmhvwtcrwqhhq9a4g72bue9w4g2o654fazbueob50itaiwkv8vihl4u52fio8io7raep5faptqnmt6k6zmve68p5qjpobdfhhhnxdfjth9eotp8kjl4dshz8cp9s084fauv9wogqn50xvi4fgprlpy28lfrc3r7rqdw9mfb8k5mt9nll4g3wuxgjbgxsgg5ri17nq1ryuyzc8vt4f9irgr7j98alui8htnj9petuedu7daipi7bksiomwm60w7p2khj9g948pz0wf8mivf8xj == \x\8\c\e\j\6\x\2\h\t\b\d\t\s\n\q\x\j\8\5\g\p\s\7\l\n\w\5\c\7\y\2\c\t\0\k\6\2\m\b\o\h\a\6\2\m\x\5\v\4\d\n\u\n\j\l\w\z\b\n\f\z\p\q\1\t\d\q\x\j\x\e\y\y\l\8\k\y\1\0\6\8\d\2\7\f\8\r\g\s\4\c\t\y\q\j\w\r\q\t\x\p\m\4\j\d\r\l\4\c\z\w\7\c\9\0\i\0\r\z\8\r\j\h\k\w\9\q\b\x\d\n\q\1\d\3\a\j\1\6\y\q\b\6\z\8\2\n\h\u\q\t\8\t\h\d\5\g\o\q\5\p\s\n\g\6\z\i\r\v\k\h\c\i\t\r\x\u\z\0\0\f\k\z\y\p\s\4\m\6\t\f\h\2\l\i\t\1\m\h\m\p\w\c\a\f\s\1\r\h\b\4\t\x\0\1\t\u\m\t\g\v\w\m\m\y\p\h\q\j\a\h\5\g\g\q\m\h\v\w\t\c\r\w\q\h\h\q\9\a\4\g\7\2\b\u\e\9\w\4\g\2\o\6\5\4\f\a\z\b\u\e\o\b\5\0\i\t\a\i\w\k\v\8\v\i\h\l\4\u\5\2\f\i\o\8\i\o\7\r\a\e\p\5\f\a\p\t\q\n\m\t\6\k\6\z\m\v\e\6\8\p\5\q\j\p\o\b\d\f\h\h\h\n\x\d\f\j\t\h\9\e\o\t\p\8\k\j\l\4\d\s\h\z\8\c\p\9\s\0\8\4\f\a\u\v\9\w\o\g\q\n\5\0\x\v\i\4\f\g\p\r\l\p\y\2\8\l\f\r\c\3\r\7\r\q\d\w\9\m\f\b\8\k\5\m\t\9\n\l\l\4\g\3\w\u\x\g\j\b\g\x\s\g\g\5\r\i\1\7\n\q\1\r\y\u\y\z\c\8\v\t\4\f\9\i\r\g\r\7\j\9\8\a\l\u\i\8\h\t\n\j\9\p\e\t\u\e\d\u\7\d\a\i\p\i\7\b\k\s\i\o\m\w\m\6\0\w\7\p\2\k\h\j\9\g\9\4\8\p\z\0\w\f\8\m\i\v\f\8\x\j ]] 00:07:34.052 05:47:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.052 05:47:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:34.052 [2024-12-15 05:47:55.555416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:34.052 [2024-12-15 05:47:55.555511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70008 ] 00:07:34.311 [2024-12-15 05:47:55.696054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.311 [2024-12-15 05:47:55.726817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.311  [2024-12-15T05:47:55.952Z] Copying: 512/512 [B] (average 500 kBps) 00:07:34.311 00:07:34.311 05:47:55 -- dd/posix.sh@93 -- # [[ x8cej6x2htbdtsnqxj85gps7lnw5c7y2ct0k62mboha62mx5v4dnunjlwzbnfzpq1tdqxjxeyyl8ky1068d27f8rgs4ctyqjwrqtxpm4jdrl4czw7c90i0rz8rjhkw9qbxdnq1d3aj16yqb6z82nhuqt8thd5goq5psng6zirvkhcitrxuz00fkzyps4m6tfh2lit1mhmpwcafs1rhb4tx01tumtgvwmmyphqjah5ggqmhvwtcrwqhhq9a4g72bue9w4g2o654fazbueob50itaiwkv8vihl4u52fio8io7raep5faptqnmt6k6zmve68p5qjpobdfhhhnxdfjth9eotp8kjl4dshz8cp9s084fauv9wogqn50xvi4fgprlpy28lfrc3r7rqdw9mfb8k5mt9nll4g3wuxgjbgxsgg5ri17nq1ryuyzc8vt4f9irgr7j98alui8htnj9petuedu7daipi7bksiomwm60w7p2khj9g948pz0wf8mivf8xj == \x\8\c\e\j\6\x\2\h\t\b\d\t\s\n\q\x\j\8\5\g\p\s\7\l\n\w\5\c\7\y\2\c\t\0\k\6\2\m\b\o\h\a\6\2\m\x\5\v\4\d\n\u\n\j\l\w\z\b\n\f\z\p\q\1\t\d\q\x\j\x\e\y\y\l\8\k\y\1\0\6\8\d\2\7\f\8\r\g\s\4\c\t\y\q\j\w\r\q\t\x\p\m\4\j\d\r\l\4\c\z\w\7\c\9\0\i\0\r\z\8\r\j\h\k\w\9\q\b\x\d\n\q\1\d\3\a\j\1\6\y\q\b\6\z\8\2\n\h\u\q\t\8\t\h\d\5\g\o\q\5\p\s\n\g\6\z\i\r\v\k\h\c\i\t\r\x\u\z\0\0\f\k\z\y\p\s\4\m\6\t\f\h\2\l\i\t\1\m\h\m\p\w\c\a\f\s\1\r\h\b\4\t\x\0\1\t\u\m\t\g\v\w\m\m\y\p\h\q\j\a\h\5\g\g\q\m\h\v\w\t\c\r\w\q\h\h\q\9\a\4\g\7\2\b\u\e\9\w\4\g\2\o\6\5\4\f\a\z\b\u\e\o\b\5\0\i\t\a\i\w\k\v\8\v\i\h\l\4\u\5\2\f\i\o\8\i\o\7\r\a\e\p\5\f\a\p\t\q\n\m\t\6\k\6\z\m\v\e\6\8\p\5\q\j\p\o\b\d\f\h\h\h\n\x\d\f\j\t\h\9\e\o\t\p\8\k\j\l\4\d\s\h\z\8\c\p\9\s\0\8\4\f\a\u\v\9\w\o\g\q\n\5\0\x\v\i\4\f\g\p\r\l\p\y\2\8\l\f\r\c\3\r\7\r\q\d\w\9\m\f\b\8\k\5\m\t\9\n\l\l\4\g\3\w\u\x\g\j\b\g\x\s\g\g\5\r\i\1\7\n\q\1\r\y\u\y\z\c\8\v\t\4\f\9\i\r\g\r\7\j\9\8\a\l\u\i\8\h\t\n\j\9\p\e\t\u\e\d\u\7\d\a\i\p\i\7\b\k\s\i\o\m\w\m\6\0\w\7\p\2\k\h\j\9\g\9\4\8\p\z\0\w\f\8\m\i\v\f\8\x\j ]] 00:07:34.311 05:47:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.311 05:47:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:34.570 [2024-12-15 05:47:55.959538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:34.570 [2024-12-15 05:47:55.959658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70015 ] 00:07:34.570 [2024-12-15 05:47:56.093772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.570 [2024-12-15 05:47:56.124242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.570  [2024-12-15T05:47:56.470Z] Copying: 512/512 [B] (average 500 kBps) 00:07:34.829 00:07:34.829 05:47:56 -- dd/posix.sh@93 -- # [[ x8cej6x2htbdtsnqxj85gps7lnw5c7y2ct0k62mboha62mx5v4dnunjlwzbnfzpq1tdqxjxeyyl8ky1068d27f8rgs4ctyqjwrqtxpm4jdrl4czw7c90i0rz8rjhkw9qbxdnq1d3aj16yqb6z82nhuqt8thd5goq5psng6zirvkhcitrxuz00fkzyps4m6tfh2lit1mhmpwcafs1rhb4tx01tumtgvwmmyphqjah5ggqmhvwtcrwqhhq9a4g72bue9w4g2o654fazbueob50itaiwkv8vihl4u52fio8io7raep5faptqnmt6k6zmve68p5qjpobdfhhhnxdfjth9eotp8kjl4dshz8cp9s084fauv9wogqn50xvi4fgprlpy28lfrc3r7rqdw9mfb8k5mt9nll4g3wuxgjbgxsgg5ri17nq1ryuyzc8vt4f9irgr7j98alui8htnj9petuedu7daipi7bksiomwm60w7p2khj9g948pz0wf8mivf8xj == \x\8\c\e\j\6\x\2\h\t\b\d\t\s\n\q\x\j\8\5\g\p\s\7\l\n\w\5\c\7\y\2\c\t\0\k\6\2\m\b\o\h\a\6\2\m\x\5\v\4\d\n\u\n\j\l\w\z\b\n\f\z\p\q\1\t\d\q\x\j\x\e\y\y\l\8\k\y\1\0\6\8\d\2\7\f\8\r\g\s\4\c\t\y\q\j\w\r\q\t\x\p\m\4\j\d\r\l\4\c\z\w\7\c\9\0\i\0\r\z\8\r\j\h\k\w\9\q\b\x\d\n\q\1\d\3\a\j\1\6\y\q\b\6\z\8\2\n\h\u\q\t\8\t\h\d\5\g\o\q\5\p\s\n\g\6\z\i\r\v\k\h\c\i\t\r\x\u\z\0\0\f\k\z\y\p\s\4\m\6\t\f\h\2\l\i\t\1\m\h\m\p\w\c\a\f\s\1\r\h\b\4\t\x\0\1\t\u\m\t\g\v\w\m\m\y\p\h\q\j\a\h\5\g\g\q\m\h\v\w\t\c\r\w\q\h\h\q\9\a\4\g\7\2\b\u\e\9\w\4\g\2\o\6\5\4\f\a\z\b\u\e\o\b\5\0\i\t\a\i\w\k\v\8\v\i\h\l\4\u\5\2\f\i\o\8\i\o\7\r\a\e\p\5\f\a\p\t\q\n\m\t\6\k\6\z\m\v\e\6\8\p\5\q\j\p\o\b\d\f\h\h\h\n\x\d\f\j\t\h\9\e\o\t\p\8\k\j\l\4\d\s\h\z\8\c\p\9\s\0\8\4\f\a\u\v\9\w\o\g\q\n\5\0\x\v\i\4\f\g\p\r\l\p\y\2\8\l\f\r\c\3\r\7\r\q\d\w\9\m\f\b\8\k\5\m\t\9\n\l\l\4\g\3\w\u\x\g\j\b\g\x\s\g\g\5\r\i\1\7\n\q\1\r\y\u\y\z\c\8\v\t\4\f\9\i\r\g\r\7\j\9\8\a\l\u\i\8\h\t\n\j\9\p\e\t\u\e\d\u\7\d\a\i\p\i\7\b\k\s\i\o\m\w\m\6\0\w\7\p\2\k\h\j\9\g\9\4\8\p\z\0\w\f\8\m\i\v\f\8\x\j ]] 00:07:34.829 00:07:34.829 real 0m3.223s 00:07:34.829 user 0m1.509s 00:07:34.829 sys 0m0.723s 00:07:34.829 ************************************ 00:07:34.829 END TEST dd_flags_misc 00:07:34.829 ************************************ 00:07:34.829 05:47:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.829 05:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:34.829 05:47:56 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:34.829 05:47:56 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:34.829 * Second test run, disabling liburing, forcing AIO 00:07:34.829 05:47:56 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:34.829 05:47:56 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:34.829 05:47:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.829 05:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.829 05:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:34.829 ************************************ 00:07:34.829 START TEST dd_flag_append_forced_aio 00:07:34.829 ************************************ 00:07:34.829 05:47:56 -- common/autotest_common.sh@1114 -- # append 00:07:34.829 05:47:56 -- dd/posix.sh@16 -- # local dump0 00:07:34.829 05:47:56 -- dd/posix.sh@17 -- # local dump1 00:07:34.829 05:47:56 -- dd/posix.sh@19 -- # gen_bytes 32 
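The dd_flags_misc pass that ends above is a small matrix: every read flag in (direct, nonblock) is combined with every write flag in (direct, nonblock, sync, dsync), each combination copies a fresh 512-byte dump0 into dump1, and the long [[ ... == ... ]] lines are the byte-for-byte comparison of the two files. A rough sketch of that loop, with the comparison written as a plain content check rather than the suite's expanded literal:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  T=/home/vagrant/spdk_repo/spdk/test/dd
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      # a fresh 512-byte payload goes into dd.dump0 first (gen_bytes 512 in the suite)
      "$DD" --if="$T/dd.dump0" --iflag="$flag_ro" --of="$T/dd.dump1" --oflag="$flag_rw"
      [[ "$(< "$T/dd.dump0")" == "$(< "$T/dd.dump1")" ]]   # output must match input
    done
  done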
00:07:34.829 05:47:56 -- dd/common.sh@98 -- # xtrace_disable 00:07:34.829 05:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:34.829 05:47:56 -- dd/posix.sh@19 -- # dump0=uitmllf7tw7xg0fwx3ulorzfhn0iprc2 00:07:34.829 05:47:56 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:34.829 05:47:56 -- dd/common.sh@98 -- # xtrace_disable 00:07:34.829 05:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:34.829 05:47:56 -- dd/posix.sh@20 -- # dump1=z2baiafc2qb926rzuk5e332shl66xfy1 00:07:34.829 05:47:56 -- dd/posix.sh@22 -- # printf %s uitmllf7tw7xg0fwx3ulorzfhn0iprc2 00:07:34.829 05:47:56 -- dd/posix.sh@23 -- # printf %s z2baiafc2qb926rzuk5e332shl66xfy1 00:07:34.829 05:47:56 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:34.829 [2024-12-15 05:47:56.422568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:34.829 [2024-12-15 05:47:56.422669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70036 ] 00:07:35.089 [2024-12-15 05:47:56.560014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.089 [2024-12-15 05:47:56.592177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.089  [2024-12-15T05:47:56.989Z] Copying: 32/32 [B] (average 31 kBps) 00:07:35.348 00:07:35.348 05:47:56 -- dd/posix.sh@27 -- # [[ z2baiafc2qb926rzuk5e332shl66xfy1uitmllf7tw7xg0fwx3ulorzfhn0iprc2 == \z\2\b\a\i\a\f\c\2\q\b\9\2\6\r\z\u\k\5\e\3\3\2\s\h\l\6\6\x\f\y\1\u\i\t\m\l\l\f\7\t\w\7\x\g\0\f\w\x\3\u\l\o\r\z\f\h\n\0\i\p\r\c\2 ]] 00:07:35.348 00:07:35.348 real 0m0.421s 00:07:35.348 user 0m0.215s 00:07:35.348 sys 0m0.086s 00:07:35.348 05:47:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.348 05:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.348 ************************************ 00:07:35.348 END TEST dd_flag_append_forced_aio 00:07:35.348 ************************************ 00:07:35.348 05:47:56 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:35.348 05:47:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.348 05:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.348 05:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.348 ************************************ 00:07:35.348 START TEST dd_flag_directory_forced_aio 00:07:35.348 ************************************ 00:07:35.348 05:47:56 -- common/autotest_common.sh@1114 -- # directory 00:07:35.348 05:47:56 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.348 05:47:56 -- common/autotest_common.sh@650 -- # local es=0 00:07:35.348 05:47:56 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.348 05:47:56 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.348 05:47:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.348 05:47:56 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.348 05:47:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.348 05:47:56 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.348 05:47:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.348 05:47:56 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.348 05:47:56 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.348 05:47:56 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.348 [2024-12-15 05:47:56.888970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:35.348 [2024-12-15 05:47:56.889072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70068 ] 00:07:35.607 [2024-12-15 05:47:57.024458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.607 [2024-12-15 05:47:57.055406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.607 [2024-12-15 05:47:57.096325] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:35.607 [2024-12-15 05:47:57.096396] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:35.607 [2024-12-15 05:47:57.096409] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.607 [2024-12-15 05:47:57.162046] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:35.607 05:47:57 -- common/autotest_common.sh@653 -- # es=236 00:07:35.607 05:47:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.607 05:47:57 -- common/autotest_common.sh@662 -- # es=108 00:07:35.607 05:47:57 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:35.607 05:47:57 -- common/autotest_common.sh@670 -- # es=1 00:07:35.607 05:47:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.608 05:47:57 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:35.608 05:47:57 -- common/autotest_common.sh@650 -- # local es=0 00:07:35.608 05:47:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:35.608 05:47:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.608 05:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.608 05:47:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.608 05:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.608 05:47:57 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.608 05:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.608 05:47:57 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.608 05:47:57 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.608 05:47:57 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:35.867 [2024-12-15 05:47:57.280810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:35.867 [2024-12-15 05:47:57.280947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70072 ] 00:07:35.867 [2024-12-15 05:47:57.414508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.867 [2024-12-15 05:47:57.452266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.867 [2024-12-15 05:47:57.498165] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:35.867 [2024-12-15 05:47:57.498274] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:35.867 [2024-12-15 05:47:57.498304] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.127 [2024-12-15 05:47:57.561535] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:36.127 05:47:57 -- common/autotest_common.sh@653 -- # es=236 00:07:36.127 05:47:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.127 05:47:57 -- common/autotest_common.sh@662 -- # es=108 00:07:36.127 05:47:57 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:36.127 05:47:57 -- common/autotest_common.sh@670 -- # es=1 00:07:36.127 05:47:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.127 00:07:36.127 real 0m0.798s 00:07:36.127 user 0m0.392s 00:07:36.127 sys 0m0.192s 00:07:36.127 05:47:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.127 05:47:57 -- common/autotest_common.sh@10 -- # set +x 00:07:36.127 ************************************ 00:07:36.127 END TEST dd_flag_directory_forced_aio 00:07:36.127 ************************************ 00:07:36.127 05:47:57 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:36.127 05:47:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.127 05:47:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.127 05:47:57 -- common/autotest_common.sh@10 -- # set +x 00:07:36.127 ************************************ 00:07:36.127 START TEST dd_flag_nofollow_forced_aio 00:07:36.127 ************************************ 00:07:36.127 05:47:57 -- common/autotest_common.sh@1114 -- # nofollow 00:07:36.127 05:47:57 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:36.127 05:47:57 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:36.127 05:47:57 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:36.127 05:47:57 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:36.127 05:47:57 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.127 05:47:57 -- common/autotest_common.sh@650 -- # local es=0 00:07:36.127 05:47:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.127 05:47:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.127 05:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.127 05:47:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.127 05:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.127 05:47:57 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.127 05:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.127 05:47:57 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.127 05:47:57 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.127 05:47:57 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.127 [2024-12-15 05:47:57.755524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:36.127 [2024-12-15 05:47:57.755700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70106 ] 00:07:36.387 [2024-12-15 05:47:57.892956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.387 [2024-12-15 05:47:57.926383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.387 [2024-12-15 05:47:57.968631] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:36.387 [2024-12-15 05:47:57.968703] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:36.387 [2024-12-15 05:47:57.968732] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.646 [2024-12-15 05:47:58.029047] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:36.646 05:47:58 -- common/autotest_common.sh@653 -- # es=216 00:07:36.646 05:47:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.647 05:47:58 -- common/autotest_common.sh@662 -- # es=88 00:07:36.647 05:47:58 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:36.647 05:47:58 -- common/autotest_common.sh@670 -- # es=1 00:07:36.647 05:47:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.647 05:47:58 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:36.647 05:47:58 -- common/autotest_common.sh@650 -- # local es=0 00:07:36.647 05:47:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:36.647 05:47:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.647 05:47:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.647 05:47:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.647 05:47:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.647 05:47:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.647 05:47:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.647 05:47:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.647 05:47:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.647 05:47:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:36.647 [2024-12-15 05:47:58.141361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:36.647 [2024-12-15 05:47:58.141482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70110 ] 00:07:36.647 [2024-12-15 05:47:58.270983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.906 [2024-12-15 05:47:58.305717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.906 [2024-12-15 05:47:58.351848] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:36.906 [2024-12-15 05:47:58.351932] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:36.906 [2024-12-15 05:47:58.351963] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.906 [2024-12-15 05:47:58.411627] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:36.906 05:47:58 -- common/autotest_common.sh@653 -- # es=216 00:07:36.906 05:47:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.906 05:47:58 -- common/autotest_common.sh@662 -- # es=88 00:07:36.906 05:47:58 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:36.906 05:47:58 -- common/autotest_common.sh@670 -- # es=1 00:07:36.906 05:47:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.906 05:47:58 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:36.906 05:47:58 -- dd/common.sh@98 -- # xtrace_disable 00:07:36.906 05:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:36.906 05:47:58 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.165 [2024-12-15 05:47:58.550903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:37.165 [2024-12-15 05:47:58.551032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70118 ] 00:07:37.165 [2024-12-15 05:47:58.686837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.165 [2024-12-15 05:47:58.720815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.166  [2024-12-15T05:47:59.066Z] Copying: 512/512 [B] (average 500 kBps) 00:07:37.425 00:07:37.425 05:47:58 -- dd/posix.sh@49 -- # [[ 0pfbrc7meg3r9lb981wvf4vgbmrjahzm418nn040tguj213up5iw6zyafgz2xq87lrzdvcuki1jahu1qdbdgfcyyhf47ixl0gkqrb7klsb83qg5nd6xrngg46gmq6gfsr3fnvq4jd4elp4re2d28qz4o0p5m6i8truh5v669nw7ngfvslt10ehtlt83sgezqjofnbp9d08lo793e0q0sx4to09a780hztgitkz4v4lcoz7ym6c3ow4ti8ktsfq4qih2wl9pzxozg3va04m7mnvctdumsbljhs8aho8ziuwtiud19aqt6cmwrgu4imje2unoo49qhvjzc3gpk4t50r96vpb8vrjn41cxohonh556zaaw65dsa6z1xyucexdv432egt2suj95tc16clht1gcil92yzw4jumrjqrlwemygdvx7jmdg2pjmxs417id5mwaiiavtwqovoonglp29atmt9jpude8fiz73wn00rr3y77uvylwj9xyo9vevf02we == \0\p\f\b\r\c\7\m\e\g\3\r\9\l\b\9\8\1\w\v\f\4\v\g\b\m\r\j\a\h\z\m\4\1\8\n\n\0\4\0\t\g\u\j\2\1\3\u\p\5\i\w\6\z\y\a\f\g\z\2\x\q\8\7\l\r\z\d\v\c\u\k\i\1\j\a\h\u\1\q\d\b\d\g\f\c\y\y\h\f\4\7\i\x\l\0\g\k\q\r\b\7\k\l\s\b\8\3\q\g\5\n\d\6\x\r\n\g\g\4\6\g\m\q\6\g\f\s\r\3\f\n\v\q\4\j\d\4\e\l\p\4\r\e\2\d\2\8\q\z\4\o\0\p\5\m\6\i\8\t\r\u\h\5\v\6\6\9\n\w\7\n\g\f\v\s\l\t\1\0\e\h\t\l\t\8\3\s\g\e\z\q\j\o\f\n\b\p\9\d\0\8\l\o\7\9\3\e\0\q\0\s\x\4\t\o\0\9\a\7\8\0\h\z\t\g\i\t\k\z\4\v\4\l\c\o\z\7\y\m\6\c\3\o\w\4\t\i\8\k\t\s\f\q\4\q\i\h\2\w\l\9\p\z\x\o\z\g\3\v\a\0\4\m\7\m\n\v\c\t\d\u\m\s\b\l\j\h\s\8\a\h\o\8\z\i\u\w\t\i\u\d\1\9\a\q\t\6\c\m\w\r\g\u\4\i\m\j\e\2\u\n\o\o\4\9\q\h\v\j\z\c\3\g\p\k\4\t\5\0\r\9\6\v\p\b\8\v\r\j\n\4\1\c\x\o\h\o\n\h\5\5\6\z\a\a\w\6\5\d\s\a\6\z\1\x\y\u\c\e\x\d\v\4\3\2\e\g\t\2\s\u\j\9\5\t\c\1\6\c\l\h\t\1\g\c\i\l\9\2\y\z\w\4\j\u\m\r\j\q\r\l\w\e\m\y\g\d\v\x\7\j\m\d\g\2\p\j\m\x\s\4\1\7\i\d\5\m\w\a\i\i\a\v\t\w\q\o\v\o\o\n\g\l\p\2\9\a\t\m\t\9\j\p\u\d\e\8\f\i\z\7\3\w\n\0\0\r\r\3\y\7\7\u\v\y\l\w\j\9\x\y\o\9\v\e\v\f\0\2\w\e ]] 00:07:37.425 00:07:37.425 real 0m1.213s 00:07:37.425 user 0m0.591s 00:07:37.425 sys 0m0.295s 00:07:37.425 05:47:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.425 ************************************ 00:07:37.425 END TEST dd_flag_nofollow_forced_aio 00:07:37.425 ************************************ 00:07:37.425 05:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:37.425 05:47:58 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:37.425 05:47:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.425 05:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.425 05:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:37.425 ************************************ 00:07:37.425 START TEST dd_flag_noatime_forced_aio 00:07:37.425 ************************************ 00:07:37.425 05:47:58 -- common/autotest_common.sh@1114 -- # noatime 00:07:37.425 05:47:58 -- dd/posix.sh@53 -- # local atime_if 00:07:37.425 05:47:58 -- dd/posix.sh@54 -- # local atime_of 00:07:37.425 05:47:58 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:37.425 05:47:58 -- dd/common.sh@98 -- # xtrace_disable 00:07:37.425 05:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:37.425 05:47:58 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.425 05:47:58 -- dd/posix.sh@60 -- 
# atime_if=1734241678 00:07:37.425 05:47:58 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.425 05:47:58 -- dd/posix.sh@61 -- # atime_of=1734241678 00:07:37.425 05:47:58 -- dd/posix.sh@66 -- # sleep 1 00:07:38.363 05:47:59 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.623 [2024-12-15 05:48:00.032089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:38.623 [2024-12-15 05:48:00.032216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70158 ] 00:07:38.623 [2024-12-15 05:48:00.172306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.623 [2024-12-15 05:48:00.214090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.882  [2024-12-15T05:48:00.523Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.882 00:07:38.882 05:48:00 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.882 05:48:00 -- dd/posix.sh@69 -- # (( atime_if == 1734241678 )) 00:07:38.882 05:48:00 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.882 05:48:00 -- dd/posix.sh@70 -- # (( atime_of == 1734241678 )) 00:07:38.882 05:48:00 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.882 [2024-12-15 05:48:00.478514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:38.882 [2024-12-15 05:48:00.478661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70164 ] 00:07:39.141 [2024-12-15 05:48:00.616128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.141 [2024-12-15 05:48:00.648167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.141  [2024-12-15T05:48:01.042Z] Copying: 512/512 [B] (average 500 kBps) 00:07:39.401 00:07:39.401 05:48:00 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.401 05:48:00 -- dd/posix.sh@73 -- # (( atime_if < 1734241680 )) 00:07:39.401 00:07:39.401 real 0m1.907s 00:07:39.401 user 0m0.450s 00:07:39.401 sys 0m0.215s 00:07:39.401 05:48:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.401 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:07:39.401 ************************************ 00:07:39.401 END TEST dd_flag_noatime_forced_aio 00:07:39.401 ************************************ 00:07:39.401 05:48:00 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:39.401 05:48:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.401 05:48:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.401 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:07:39.401 ************************************ 00:07:39.401 START TEST dd_flags_misc_forced_aio 00:07:39.401 ************************************ 00:07:39.401 05:48:00 -- common/autotest_common.sh@1114 -- # io 00:07:39.401 05:48:00 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:39.401 05:48:00 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:39.401 05:48:00 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:39.401 05:48:00 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:39.401 05:48:00 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:39.401 05:48:00 -- dd/common.sh@98 -- # xtrace_disable 00:07:39.401 05:48:00 -- common/autotest_common.sh@10 -- # set +x 00:07:39.401 05:48:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.401 05:48:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:39.401 [2024-12-15 05:48:00.968853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:39.401 [2024-12-15 05:48:00.968978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70196 ] 00:07:39.660 [2024-12-15 05:48:01.104919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.660 [2024-12-15 05:48:01.136656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.660  [2024-12-15T05:48:01.561Z] Copying: 512/512 [B] (average 500 kBps) 00:07:39.920 00:07:39.920 05:48:01 -- dd/posix.sh@93 -- # [[ mk1jodt4w4lz2n61nls2rbsfmw3bp3k9pb4wbzamiav7abg82pkw518v445y0su29ui8i09sr34d1jnoz5axegyy4i0lf3r0mum692yyyi52wlnd3xa173jkl56p8pj46vtv7lhtwrc7sfz3ymrza4fh3ma5n76r8l564hx6sn8jby91oa6zwyedc0cp7yneew8aj2c4ebev11v88mm6b6na0pig9lq0dumtan08ohw7sqzqccvlcvicyhp8pcl6skgfnfkp3uutfn6m9z4nnrum2qbiv7boy2bu6hedct88blc05wb0yrzv0r20lyhi0fqby5qbo2wap2gly9twbpzv11wp77ya58c4nzop0n3xxxfwnl9d726pf9q1br8ulg5b13kt34ykmwyf4ad31ca43g3uv1zfdmel659wz3fll5hrdxcaghay1i6ust275su2h8k7uvfuwf6wrn4y301cxusw7z2271nji8brt5zhmr0xy6oqd44nhehm0e94 == \m\k\1\j\o\d\t\4\w\4\l\z\2\n\6\1\n\l\s\2\r\b\s\f\m\w\3\b\p\3\k\9\p\b\4\w\b\z\a\m\i\a\v\7\a\b\g\8\2\p\k\w\5\1\8\v\4\4\5\y\0\s\u\2\9\u\i\8\i\0\9\s\r\3\4\d\1\j\n\o\z\5\a\x\e\g\y\y\4\i\0\l\f\3\r\0\m\u\m\6\9\2\y\y\y\i\5\2\w\l\n\d\3\x\a\1\7\3\j\k\l\5\6\p\8\p\j\4\6\v\t\v\7\l\h\t\w\r\c\7\s\f\z\3\y\m\r\z\a\4\f\h\3\m\a\5\n\7\6\r\8\l\5\6\4\h\x\6\s\n\8\j\b\y\9\1\o\a\6\z\w\y\e\d\c\0\c\p\7\y\n\e\e\w\8\a\j\2\c\4\e\b\e\v\1\1\v\8\8\m\m\6\b\6\n\a\0\p\i\g\9\l\q\0\d\u\m\t\a\n\0\8\o\h\w\7\s\q\z\q\c\c\v\l\c\v\i\c\y\h\p\8\p\c\l\6\s\k\g\f\n\f\k\p\3\u\u\t\f\n\6\m\9\z\4\n\n\r\u\m\2\q\b\i\v\7\b\o\y\2\b\u\6\h\e\d\c\t\8\8\b\l\c\0\5\w\b\0\y\r\z\v\0\r\2\0\l\y\h\i\0\f\q\b\y\5\q\b\o\2\w\a\p\2\g\l\y\9\t\w\b\p\z\v\1\1\w\p\7\7\y\a\5\8\c\4\n\z\o\p\0\n\3\x\x\x\f\w\n\l\9\d\7\2\6\p\f\9\q\1\b\r\8\u\l\g\5\b\1\3\k\t\3\4\y\k\m\w\y\f\4\a\d\3\1\c\a\4\3\g\3\u\v\1\z\f\d\m\e\l\6\5\9\w\z\3\f\l\l\5\h\r\d\x\c\a\g\h\a\y\1\i\6\u\s\t\2\7\5\s\u\2\h\8\k\7\u\v\f\u\w\f\6\w\r\n\4\y\3\0\1\c\x\u\s\w\7\z\2\2\7\1\n\j\i\8\b\r\t\5\z\h\m\r\0\x\y\6\o\q\d\4\4\n\h\e\h\m\0\e\9\4 ]] 00:07:39.920 05:48:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.920 05:48:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:39.920 [2024-12-15 05:48:01.387572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:39.920 [2024-12-15 05:48:01.387710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70198 ] 00:07:39.920 [2024-12-15 05:48:01.524449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.178 [2024-12-15 05:48:01.561791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.178  [2024-12-15T05:48:01.819Z] Copying: 512/512 [B] (average 500 kBps) 00:07:40.178 00:07:40.178 05:48:01 -- dd/posix.sh@93 -- # [[ mk1jodt4w4lz2n61nls2rbsfmw3bp3k9pb4wbzamiav7abg82pkw518v445y0su29ui8i09sr34d1jnoz5axegyy4i0lf3r0mum692yyyi52wlnd3xa173jkl56p8pj46vtv7lhtwrc7sfz3ymrza4fh3ma5n76r8l564hx6sn8jby91oa6zwyedc0cp7yneew8aj2c4ebev11v88mm6b6na0pig9lq0dumtan08ohw7sqzqccvlcvicyhp8pcl6skgfnfkp3uutfn6m9z4nnrum2qbiv7boy2bu6hedct88blc05wb0yrzv0r20lyhi0fqby5qbo2wap2gly9twbpzv11wp77ya58c4nzop0n3xxxfwnl9d726pf9q1br8ulg5b13kt34ykmwyf4ad31ca43g3uv1zfdmel659wz3fll5hrdxcaghay1i6ust275su2h8k7uvfuwf6wrn4y301cxusw7z2271nji8brt5zhmr0xy6oqd44nhehm0e94 == \m\k\1\j\o\d\t\4\w\4\l\z\2\n\6\1\n\l\s\2\r\b\s\f\m\w\3\b\p\3\k\9\p\b\4\w\b\z\a\m\i\a\v\7\a\b\g\8\2\p\k\w\5\1\8\v\4\4\5\y\0\s\u\2\9\u\i\8\i\0\9\s\r\3\4\d\1\j\n\o\z\5\a\x\e\g\y\y\4\i\0\l\f\3\r\0\m\u\m\6\9\2\y\y\y\i\5\2\w\l\n\d\3\x\a\1\7\3\j\k\l\5\6\p\8\p\j\4\6\v\t\v\7\l\h\t\w\r\c\7\s\f\z\3\y\m\r\z\a\4\f\h\3\m\a\5\n\7\6\r\8\l\5\6\4\h\x\6\s\n\8\j\b\y\9\1\o\a\6\z\w\y\e\d\c\0\c\p\7\y\n\e\e\w\8\a\j\2\c\4\e\b\e\v\1\1\v\8\8\m\m\6\b\6\n\a\0\p\i\g\9\l\q\0\d\u\m\t\a\n\0\8\o\h\w\7\s\q\z\q\c\c\v\l\c\v\i\c\y\h\p\8\p\c\l\6\s\k\g\f\n\f\k\p\3\u\u\t\f\n\6\m\9\z\4\n\n\r\u\m\2\q\b\i\v\7\b\o\y\2\b\u\6\h\e\d\c\t\8\8\b\l\c\0\5\w\b\0\y\r\z\v\0\r\2\0\l\y\h\i\0\f\q\b\y\5\q\b\o\2\w\a\p\2\g\l\y\9\t\w\b\p\z\v\1\1\w\p\7\7\y\a\5\8\c\4\n\z\o\p\0\n\3\x\x\x\f\w\n\l\9\d\7\2\6\p\f\9\q\1\b\r\8\u\l\g\5\b\1\3\k\t\3\4\y\k\m\w\y\f\4\a\d\3\1\c\a\4\3\g\3\u\v\1\z\f\d\m\e\l\6\5\9\w\z\3\f\l\l\5\h\r\d\x\c\a\g\h\a\y\1\i\6\u\s\t\2\7\5\s\u\2\h\8\k\7\u\v\f\u\w\f\6\w\r\n\4\y\3\0\1\c\x\u\s\w\7\z\2\2\7\1\n\j\i\8\b\r\t\5\z\h\m\r\0\x\y\6\o\q\d\4\4\n\h\e\h\m\0\e\9\4 ]] 00:07:40.178 05:48:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:40.178 05:48:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:40.437 [2024-12-15 05:48:01.827462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:40.437 [2024-12-15 05:48:01.827621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70206 ] 00:07:40.437 [2024-12-15 05:48:01.964085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.437 [2024-12-15 05:48:01.997610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.437  [2024-12-15T05:48:02.337Z] Copying: 512/512 [B] (average 166 kBps) 00:07:40.696 00:07:40.697 05:48:02 -- dd/posix.sh@93 -- # [[ mk1jodt4w4lz2n61nls2rbsfmw3bp3k9pb4wbzamiav7abg82pkw518v445y0su29ui8i09sr34d1jnoz5axegyy4i0lf3r0mum692yyyi52wlnd3xa173jkl56p8pj46vtv7lhtwrc7sfz3ymrza4fh3ma5n76r8l564hx6sn8jby91oa6zwyedc0cp7yneew8aj2c4ebev11v88mm6b6na0pig9lq0dumtan08ohw7sqzqccvlcvicyhp8pcl6skgfnfkp3uutfn6m9z4nnrum2qbiv7boy2bu6hedct88blc05wb0yrzv0r20lyhi0fqby5qbo2wap2gly9twbpzv11wp77ya58c4nzop0n3xxxfwnl9d726pf9q1br8ulg5b13kt34ykmwyf4ad31ca43g3uv1zfdmel659wz3fll5hrdxcaghay1i6ust275su2h8k7uvfuwf6wrn4y301cxusw7z2271nji8brt5zhmr0xy6oqd44nhehm0e94 == \m\k\1\j\o\d\t\4\w\4\l\z\2\n\6\1\n\l\s\2\r\b\s\f\m\w\3\b\p\3\k\9\p\b\4\w\b\z\a\m\i\a\v\7\a\b\g\8\2\p\k\w\5\1\8\v\4\4\5\y\0\s\u\2\9\u\i\8\i\0\9\s\r\3\4\d\1\j\n\o\z\5\a\x\e\g\y\y\4\i\0\l\f\3\r\0\m\u\m\6\9\2\y\y\y\i\5\2\w\l\n\d\3\x\a\1\7\3\j\k\l\5\6\p\8\p\j\4\6\v\t\v\7\l\h\t\w\r\c\7\s\f\z\3\y\m\r\z\a\4\f\h\3\m\a\5\n\7\6\r\8\l\5\6\4\h\x\6\s\n\8\j\b\y\9\1\o\a\6\z\w\y\e\d\c\0\c\p\7\y\n\e\e\w\8\a\j\2\c\4\e\b\e\v\1\1\v\8\8\m\m\6\b\6\n\a\0\p\i\g\9\l\q\0\d\u\m\t\a\n\0\8\o\h\w\7\s\q\z\q\c\c\v\l\c\v\i\c\y\h\p\8\p\c\l\6\s\k\g\f\n\f\k\p\3\u\u\t\f\n\6\m\9\z\4\n\n\r\u\m\2\q\b\i\v\7\b\o\y\2\b\u\6\h\e\d\c\t\8\8\b\l\c\0\5\w\b\0\y\r\z\v\0\r\2\0\l\y\h\i\0\f\q\b\y\5\q\b\o\2\w\a\p\2\g\l\y\9\t\w\b\p\z\v\1\1\w\p\7\7\y\a\5\8\c\4\n\z\o\p\0\n\3\x\x\x\f\w\n\l\9\d\7\2\6\p\f\9\q\1\b\r\8\u\l\g\5\b\1\3\k\t\3\4\y\k\m\w\y\f\4\a\d\3\1\c\a\4\3\g\3\u\v\1\z\f\d\m\e\l\6\5\9\w\z\3\f\l\l\5\h\r\d\x\c\a\g\h\a\y\1\i\6\u\s\t\2\7\5\s\u\2\h\8\k\7\u\v\f\u\w\f\6\w\r\n\4\y\3\0\1\c\x\u\s\w\7\z\2\2\7\1\n\j\i\8\b\r\t\5\z\h\m\r\0\x\y\6\o\q\d\4\4\n\h\e\h\m\0\e\9\4 ]] 00:07:40.697 05:48:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:40.697 05:48:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:40.697 [2024-12-15 05:48:02.251697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:40.697 [2024-12-15 05:48:02.251833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70213 ] 00:07:40.975 [2024-12-15 05:48:02.389689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.975 [2024-12-15 05:48:02.423563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.975  [2024-12-15T05:48:02.874Z] Copying: 512/512 [B] (average 250 kBps) 00:07:41.233 00:07:41.234 05:48:02 -- dd/posix.sh@93 -- # [[ mk1jodt4w4lz2n61nls2rbsfmw3bp3k9pb4wbzamiav7abg82pkw518v445y0su29ui8i09sr34d1jnoz5axegyy4i0lf3r0mum692yyyi52wlnd3xa173jkl56p8pj46vtv7lhtwrc7sfz3ymrza4fh3ma5n76r8l564hx6sn8jby91oa6zwyedc0cp7yneew8aj2c4ebev11v88mm6b6na0pig9lq0dumtan08ohw7sqzqccvlcvicyhp8pcl6skgfnfkp3uutfn6m9z4nnrum2qbiv7boy2bu6hedct88blc05wb0yrzv0r20lyhi0fqby5qbo2wap2gly9twbpzv11wp77ya58c4nzop0n3xxxfwnl9d726pf9q1br8ulg5b13kt34ykmwyf4ad31ca43g3uv1zfdmel659wz3fll5hrdxcaghay1i6ust275su2h8k7uvfuwf6wrn4y301cxusw7z2271nji8brt5zhmr0xy6oqd44nhehm0e94 == \m\k\1\j\o\d\t\4\w\4\l\z\2\n\6\1\n\l\s\2\r\b\s\f\m\w\3\b\p\3\k\9\p\b\4\w\b\z\a\m\i\a\v\7\a\b\g\8\2\p\k\w\5\1\8\v\4\4\5\y\0\s\u\2\9\u\i\8\i\0\9\s\r\3\4\d\1\j\n\o\z\5\a\x\e\g\y\y\4\i\0\l\f\3\r\0\m\u\m\6\9\2\y\y\y\i\5\2\w\l\n\d\3\x\a\1\7\3\j\k\l\5\6\p\8\p\j\4\6\v\t\v\7\l\h\t\w\r\c\7\s\f\z\3\y\m\r\z\a\4\f\h\3\m\a\5\n\7\6\r\8\l\5\6\4\h\x\6\s\n\8\j\b\y\9\1\o\a\6\z\w\y\e\d\c\0\c\p\7\y\n\e\e\w\8\a\j\2\c\4\e\b\e\v\1\1\v\8\8\m\m\6\b\6\n\a\0\p\i\g\9\l\q\0\d\u\m\t\a\n\0\8\o\h\w\7\s\q\z\q\c\c\v\l\c\v\i\c\y\h\p\8\p\c\l\6\s\k\g\f\n\f\k\p\3\u\u\t\f\n\6\m\9\z\4\n\n\r\u\m\2\q\b\i\v\7\b\o\y\2\b\u\6\h\e\d\c\t\8\8\b\l\c\0\5\w\b\0\y\r\z\v\0\r\2\0\l\y\h\i\0\f\q\b\y\5\q\b\o\2\w\a\p\2\g\l\y\9\t\w\b\p\z\v\1\1\w\p\7\7\y\a\5\8\c\4\n\z\o\p\0\n\3\x\x\x\f\w\n\l\9\d\7\2\6\p\f\9\q\1\b\r\8\u\l\g\5\b\1\3\k\t\3\4\y\k\m\w\y\f\4\a\d\3\1\c\a\4\3\g\3\u\v\1\z\f\d\m\e\l\6\5\9\w\z\3\f\l\l\5\h\r\d\x\c\a\g\h\a\y\1\i\6\u\s\t\2\7\5\s\u\2\h\8\k\7\u\v\f\u\w\f\6\w\r\n\4\y\3\0\1\c\x\u\s\w\7\z\2\2\7\1\n\j\i\8\b\r\t\5\z\h\m\r\0\x\y\6\o\q\d\4\4\n\h\e\h\m\0\e\9\4 ]] 00:07:41.234 05:48:02 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:41.234 05:48:02 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:41.234 05:48:02 -- dd/common.sh@98 -- # xtrace_disable 00:07:41.234 05:48:02 -- common/autotest_common.sh@10 -- # set +x 00:07:41.234 05:48:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:41.234 05:48:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:41.234 [2024-12-15 05:48:02.678982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.234 [2024-12-15 05:48:02.679114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70221 ] 00:07:41.234 [2024-12-15 05:48:02.817606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.234 [2024-12-15 05:48:02.852176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.492  [2024-12-15T05:48:03.133Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.492 00:07:41.492 05:48:03 -- dd/posix.sh@93 -- # [[ 4xqy6gdsbuja2an5ijonxbddrnptb4dz1y79zkiuw1loshedudyxwoh70yh2lcwc13h9tyv3t03ilky2xtduf61ic5kn6sw4bizweo83nwr1zivsswbj9ig97459npjn3zgjeu4q5z3181nj6ntbsxnbrgc3cgkmrtdzn029e234rjufaf6fu1dykw30hc322fi6k035scl7dng5167dfx6cpq4hd7f04cnbr725ksvb7zl489vyo83sz545yievk399028dax42yrzyoranmg8kluvl3p0xni6902654av1yoac7u4iy2kdqnqmbwsd8sg888jnjc52mubug1l6oxz0bol4rrbz5aok7ojnwk4q07svzjq24e8wg4b8c1mpy4hj6erzptytpjigag5yr8vhion2mb7ux7wxf0j4xkdx2fkc84jrlem6unwchix2t3v1hqnm2jqkb2lmablgwalbo54fbd7wqgg2usndw09je1os2dcdfneiswdnoe9k == \4\x\q\y\6\g\d\s\b\u\j\a\2\a\n\5\i\j\o\n\x\b\d\d\r\n\p\t\b\4\d\z\1\y\7\9\z\k\i\u\w\1\l\o\s\h\e\d\u\d\y\x\w\o\h\7\0\y\h\2\l\c\w\c\1\3\h\9\t\y\v\3\t\0\3\i\l\k\y\2\x\t\d\u\f\6\1\i\c\5\k\n\6\s\w\4\b\i\z\w\e\o\8\3\n\w\r\1\z\i\v\s\s\w\b\j\9\i\g\9\7\4\5\9\n\p\j\n\3\z\g\j\e\u\4\q\5\z\3\1\8\1\n\j\6\n\t\b\s\x\n\b\r\g\c\3\c\g\k\m\r\t\d\z\n\0\2\9\e\2\3\4\r\j\u\f\a\f\6\f\u\1\d\y\k\w\3\0\h\c\3\2\2\f\i\6\k\0\3\5\s\c\l\7\d\n\g\5\1\6\7\d\f\x\6\c\p\q\4\h\d\7\f\0\4\c\n\b\r\7\2\5\k\s\v\b\7\z\l\4\8\9\v\y\o\8\3\s\z\5\4\5\y\i\e\v\k\3\9\9\0\2\8\d\a\x\4\2\y\r\z\y\o\r\a\n\m\g\8\k\l\u\v\l\3\p\0\x\n\i\6\9\0\2\6\5\4\a\v\1\y\o\a\c\7\u\4\i\y\2\k\d\q\n\q\m\b\w\s\d\8\s\g\8\8\8\j\n\j\c\5\2\m\u\b\u\g\1\l\6\o\x\z\0\b\o\l\4\r\r\b\z\5\a\o\k\7\o\j\n\w\k\4\q\0\7\s\v\z\j\q\2\4\e\8\w\g\4\b\8\c\1\m\p\y\4\h\j\6\e\r\z\p\t\y\t\p\j\i\g\a\g\5\y\r\8\v\h\i\o\n\2\m\b\7\u\x\7\w\x\f\0\j\4\x\k\d\x\2\f\k\c\8\4\j\r\l\e\m\6\u\n\w\c\h\i\x\2\t\3\v\1\h\q\n\m\2\j\q\k\b\2\l\m\a\b\l\g\w\a\l\b\o\5\4\f\b\d\7\w\q\g\g\2\u\s\n\d\w\0\9\j\e\1\o\s\2\d\c\d\f\n\e\i\s\w\d\n\o\e\9\k ]] 00:07:41.492 05:48:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:41.492 05:48:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:41.492 [2024-12-15 05:48:03.105331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.492 [2024-12-15 05:48:03.105461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70228 ] 00:07:41.751 [2024-12-15 05:48:03.244112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.751 [2024-12-15 05:48:03.276711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.751  [2024-12-15T05:48:03.651Z] Copying: 512/512 [B] (average 500 kBps) 00:07:42.010 00:07:42.011 05:48:03 -- dd/posix.sh@93 -- # [[ 4xqy6gdsbuja2an5ijonxbddrnptb4dz1y79zkiuw1loshedudyxwoh70yh2lcwc13h9tyv3t03ilky2xtduf61ic5kn6sw4bizweo83nwr1zivsswbj9ig97459npjn3zgjeu4q5z3181nj6ntbsxnbrgc3cgkmrtdzn029e234rjufaf6fu1dykw30hc322fi6k035scl7dng5167dfx6cpq4hd7f04cnbr725ksvb7zl489vyo83sz545yievk399028dax42yrzyoranmg8kluvl3p0xni6902654av1yoac7u4iy2kdqnqmbwsd8sg888jnjc52mubug1l6oxz0bol4rrbz5aok7ojnwk4q07svzjq24e8wg4b8c1mpy4hj6erzptytpjigag5yr8vhion2mb7ux7wxf0j4xkdx2fkc84jrlem6unwchix2t3v1hqnm2jqkb2lmablgwalbo54fbd7wqgg2usndw09je1os2dcdfneiswdnoe9k == \4\x\q\y\6\g\d\s\b\u\j\a\2\a\n\5\i\j\o\n\x\b\d\d\r\n\p\t\b\4\d\z\1\y\7\9\z\k\i\u\w\1\l\o\s\h\e\d\u\d\y\x\w\o\h\7\0\y\h\2\l\c\w\c\1\3\h\9\t\y\v\3\t\0\3\i\l\k\y\2\x\t\d\u\f\6\1\i\c\5\k\n\6\s\w\4\b\i\z\w\e\o\8\3\n\w\r\1\z\i\v\s\s\w\b\j\9\i\g\9\7\4\5\9\n\p\j\n\3\z\g\j\e\u\4\q\5\z\3\1\8\1\n\j\6\n\t\b\s\x\n\b\r\g\c\3\c\g\k\m\r\t\d\z\n\0\2\9\e\2\3\4\r\j\u\f\a\f\6\f\u\1\d\y\k\w\3\0\h\c\3\2\2\f\i\6\k\0\3\5\s\c\l\7\d\n\g\5\1\6\7\d\f\x\6\c\p\q\4\h\d\7\f\0\4\c\n\b\r\7\2\5\k\s\v\b\7\z\l\4\8\9\v\y\o\8\3\s\z\5\4\5\y\i\e\v\k\3\9\9\0\2\8\d\a\x\4\2\y\r\z\y\o\r\a\n\m\g\8\k\l\u\v\l\3\p\0\x\n\i\6\9\0\2\6\5\4\a\v\1\y\o\a\c\7\u\4\i\y\2\k\d\q\n\q\m\b\w\s\d\8\s\g\8\8\8\j\n\j\c\5\2\m\u\b\u\g\1\l\6\o\x\z\0\b\o\l\4\r\r\b\z\5\a\o\k\7\o\j\n\w\k\4\q\0\7\s\v\z\j\q\2\4\e\8\w\g\4\b\8\c\1\m\p\y\4\h\j\6\e\r\z\p\t\y\t\p\j\i\g\a\g\5\y\r\8\v\h\i\o\n\2\m\b\7\u\x\7\w\x\f\0\j\4\x\k\d\x\2\f\k\c\8\4\j\r\l\e\m\6\u\n\w\c\h\i\x\2\t\3\v\1\h\q\n\m\2\j\q\k\b\2\l\m\a\b\l\g\w\a\l\b\o\5\4\f\b\d\7\w\q\g\g\2\u\s\n\d\w\0\9\j\e\1\o\s\2\d\c\d\f\n\e\i\s\w\d\n\o\e\9\k ]] 00:07:42.011 05:48:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.011 05:48:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:42.011 [2024-12-15 05:48:03.522354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:42.011 [2024-12-15 05:48:03.522455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70230 ] 00:07:42.270 [2024-12-15 05:48:03.659186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.270 [2024-12-15 05:48:03.693103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.270  [2024-12-15T05:48:04.170Z] Copying: 512/512 [B] (average 166 kBps) 00:07:42.529 00:07:42.529 05:48:03 -- dd/posix.sh@93 -- # [[ 4xqy6gdsbuja2an5ijonxbddrnptb4dz1y79zkiuw1loshedudyxwoh70yh2lcwc13h9tyv3t03ilky2xtduf61ic5kn6sw4bizweo83nwr1zivsswbj9ig97459npjn3zgjeu4q5z3181nj6ntbsxnbrgc3cgkmrtdzn029e234rjufaf6fu1dykw30hc322fi6k035scl7dng5167dfx6cpq4hd7f04cnbr725ksvb7zl489vyo83sz545yievk399028dax42yrzyoranmg8kluvl3p0xni6902654av1yoac7u4iy2kdqnqmbwsd8sg888jnjc52mubug1l6oxz0bol4rrbz5aok7ojnwk4q07svzjq24e8wg4b8c1mpy4hj6erzptytpjigag5yr8vhion2mb7ux7wxf0j4xkdx2fkc84jrlem6unwchix2t3v1hqnm2jqkb2lmablgwalbo54fbd7wqgg2usndw09je1os2dcdfneiswdnoe9k == \4\x\q\y\6\g\d\s\b\u\j\a\2\a\n\5\i\j\o\n\x\b\d\d\r\n\p\t\b\4\d\z\1\y\7\9\z\k\i\u\w\1\l\o\s\h\e\d\u\d\y\x\w\o\h\7\0\y\h\2\l\c\w\c\1\3\h\9\t\y\v\3\t\0\3\i\l\k\y\2\x\t\d\u\f\6\1\i\c\5\k\n\6\s\w\4\b\i\z\w\e\o\8\3\n\w\r\1\z\i\v\s\s\w\b\j\9\i\g\9\7\4\5\9\n\p\j\n\3\z\g\j\e\u\4\q\5\z\3\1\8\1\n\j\6\n\t\b\s\x\n\b\r\g\c\3\c\g\k\m\r\t\d\z\n\0\2\9\e\2\3\4\r\j\u\f\a\f\6\f\u\1\d\y\k\w\3\0\h\c\3\2\2\f\i\6\k\0\3\5\s\c\l\7\d\n\g\5\1\6\7\d\f\x\6\c\p\q\4\h\d\7\f\0\4\c\n\b\r\7\2\5\k\s\v\b\7\z\l\4\8\9\v\y\o\8\3\s\z\5\4\5\y\i\e\v\k\3\9\9\0\2\8\d\a\x\4\2\y\r\z\y\o\r\a\n\m\g\8\k\l\u\v\l\3\p\0\x\n\i\6\9\0\2\6\5\4\a\v\1\y\o\a\c\7\u\4\i\y\2\k\d\q\n\q\m\b\w\s\d\8\s\g\8\8\8\j\n\j\c\5\2\m\u\b\u\g\1\l\6\o\x\z\0\b\o\l\4\r\r\b\z\5\a\o\k\7\o\j\n\w\k\4\q\0\7\s\v\z\j\q\2\4\e\8\w\g\4\b\8\c\1\m\p\y\4\h\j\6\e\r\z\p\t\y\t\p\j\i\g\a\g\5\y\r\8\v\h\i\o\n\2\m\b\7\u\x\7\w\x\f\0\j\4\x\k\d\x\2\f\k\c\8\4\j\r\l\e\m\6\u\n\w\c\h\i\x\2\t\3\v\1\h\q\n\m\2\j\q\k\b\2\l\m\a\b\l\g\w\a\l\b\o\5\4\f\b\d\7\w\q\g\g\2\u\s\n\d\w\0\9\j\e\1\o\s\2\d\c\d\f\n\e\i\s\w\d\n\o\e\9\k ]] 00:07:42.529 05:48:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.529 05:48:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:42.529 [2024-12-15 05:48:03.966271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:42.529 [2024-12-15 05:48:03.966418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70238 ] 00:07:42.529 [2024-12-15 05:48:04.103370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.529 [2024-12-15 05:48:04.138406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.788  [2024-12-15T05:48:04.429Z] Copying: 512/512 [B] (average 500 kBps) 00:07:42.788 00:07:42.788 05:48:04 -- dd/posix.sh@93 -- # [[ 4xqy6gdsbuja2an5ijonxbddrnptb4dz1y79zkiuw1loshedudyxwoh70yh2lcwc13h9tyv3t03ilky2xtduf61ic5kn6sw4bizweo83nwr1zivsswbj9ig97459npjn3zgjeu4q5z3181nj6ntbsxnbrgc3cgkmrtdzn029e234rjufaf6fu1dykw30hc322fi6k035scl7dng5167dfx6cpq4hd7f04cnbr725ksvb7zl489vyo83sz545yievk399028dax42yrzyoranmg8kluvl3p0xni6902654av1yoac7u4iy2kdqnqmbwsd8sg888jnjc52mubug1l6oxz0bol4rrbz5aok7ojnwk4q07svzjq24e8wg4b8c1mpy4hj6erzptytpjigag5yr8vhion2mb7ux7wxf0j4xkdx2fkc84jrlem6unwchix2t3v1hqnm2jqkb2lmablgwalbo54fbd7wqgg2usndw09je1os2dcdfneiswdnoe9k == \4\x\q\y\6\g\d\s\b\u\j\a\2\a\n\5\i\j\o\n\x\b\d\d\r\n\p\t\b\4\d\z\1\y\7\9\z\k\i\u\w\1\l\o\s\h\e\d\u\d\y\x\w\o\h\7\0\y\h\2\l\c\w\c\1\3\h\9\t\y\v\3\t\0\3\i\l\k\y\2\x\t\d\u\f\6\1\i\c\5\k\n\6\s\w\4\b\i\z\w\e\o\8\3\n\w\r\1\z\i\v\s\s\w\b\j\9\i\g\9\7\4\5\9\n\p\j\n\3\z\g\j\e\u\4\q\5\z\3\1\8\1\n\j\6\n\t\b\s\x\n\b\r\g\c\3\c\g\k\m\r\t\d\z\n\0\2\9\e\2\3\4\r\j\u\f\a\f\6\f\u\1\d\y\k\w\3\0\h\c\3\2\2\f\i\6\k\0\3\5\s\c\l\7\d\n\g\5\1\6\7\d\f\x\6\c\p\q\4\h\d\7\f\0\4\c\n\b\r\7\2\5\k\s\v\b\7\z\l\4\8\9\v\y\o\8\3\s\z\5\4\5\y\i\e\v\k\3\9\9\0\2\8\d\a\x\4\2\y\r\z\y\o\r\a\n\m\g\8\k\l\u\v\l\3\p\0\x\n\i\6\9\0\2\6\5\4\a\v\1\y\o\a\c\7\u\4\i\y\2\k\d\q\n\q\m\b\w\s\d\8\s\g\8\8\8\j\n\j\c\5\2\m\u\b\u\g\1\l\6\o\x\z\0\b\o\l\4\r\r\b\z\5\a\o\k\7\o\j\n\w\k\4\q\0\7\s\v\z\j\q\2\4\e\8\w\g\4\b\8\c\1\m\p\y\4\h\j\6\e\r\z\p\t\y\t\p\j\i\g\a\g\5\y\r\8\v\h\i\o\n\2\m\b\7\u\x\7\w\x\f\0\j\4\x\k\d\x\2\f\k\c\8\4\j\r\l\e\m\6\u\n\w\c\h\i\x\2\t\3\v\1\h\q\n\m\2\j\q\k\b\2\l\m\a\b\l\g\w\a\l\b\o\5\4\f\b\d\7\w\q\g\g\2\u\s\n\d\w\0\9\j\e\1\o\s\2\d\c\d\f\n\e\i\s\w\d\n\o\e\9\k ]] 00:07:42.789 00:07:42.789 real 0m3.428s 00:07:42.789 user 0m1.673s 00:07:42.789 sys 0m0.765s 00:07:42.789 05:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.789 05:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:42.789 ************************************ 00:07:42.789 END TEST dd_flags_misc_forced_aio 00:07:42.789 ************************************ 00:07:42.789 05:48:04 -- dd/posix.sh@1 -- # cleanup 00:07:42.789 05:48:04 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:42.789 05:48:04 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:42.789 00:07:42.789 real 0m15.939s 00:07:42.789 user 0m6.671s 00:07:42.789 sys 0m3.441s 00:07:42.789 05:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.789 05:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:42.789 ************************************ 00:07:42.789 END TEST spdk_dd_posix 00:07:42.789 ************************************ 00:07:43.048 05:48:04 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:43.048 05:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:43.048 05:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:07:43.048 05:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:43.048 ************************************ 00:07:43.048 START TEST spdk_dd_malloc 00:07:43.048 ************************************ 00:07:43.048 05:48:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:43.048 * Looking for test storage... 00:07:43.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.048 05:48:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:43.048 05:48:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:43.048 05:48:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:43.048 05:48:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:43.048 05:48:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:43.048 05:48:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:43.048 05:48:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:43.048 05:48:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:43.048 05:48:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:43.048 05:48:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.048 05:48:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:43.048 05:48:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:43.048 05:48:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:43.048 05:48:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:43.048 05:48:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:43.048 05:48:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:43.048 05:48:04 -- scripts/common.sh@344 -- # : 1 00:07:43.048 05:48:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:43.048 05:48:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.048 05:48:04 -- scripts/common.sh@364 -- # decimal 1 00:07:43.048 05:48:04 -- scripts/common.sh@352 -- # local d=1 00:07:43.048 05:48:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.048 05:48:04 -- scripts/common.sh@354 -- # echo 1 00:07:43.048 05:48:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:43.048 05:48:04 -- scripts/common.sh@365 -- # decimal 2 00:07:43.048 05:48:04 -- scripts/common.sh@352 -- # local d=2 00:07:43.048 05:48:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.048 05:48:04 -- scripts/common.sh@354 -- # echo 2 00:07:43.048 05:48:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:43.048 05:48:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:43.048 05:48:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:43.048 05:48:04 -- scripts/common.sh@367 -- # return 0 00:07:43.048 05:48:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.048 05:48:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.048 --rc genhtml_branch_coverage=1 00:07:43.048 --rc genhtml_function_coverage=1 00:07:43.048 --rc genhtml_legend=1 00:07:43.048 --rc geninfo_all_blocks=1 00:07:43.048 --rc geninfo_unexecuted_blocks=1 00:07:43.048 00:07:43.048 ' 00:07:43.048 05:48:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.048 --rc genhtml_branch_coverage=1 00:07:43.048 --rc genhtml_function_coverage=1 00:07:43.048 --rc genhtml_legend=1 00:07:43.048 --rc geninfo_all_blocks=1 00:07:43.048 --rc geninfo_unexecuted_blocks=1 00:07:43.048 00:07:43.048 ' 00:07:43.048 05:48:04 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:07:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.048 --rc genhtml_branch_coverage=1 00:07:43.048 --rc genhtml_function_coverage=1 00:07:43.048 --rc genhtml_legend=1 00:07:43.048 --rc geninfo_all_blocks=1 00:07:43.048 --rc geninfo_unexecuted_blocks=1 00:07:43.048 00:07:43.048 ' 00:07:43.048 05:48:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.048 --rc genhtml_branch_coverage=1 00:07:43.048 --rc genhtml_function_coverage=1 00:07:43.048 --rc genhtml_legend=1 00:07:43.048 --rc geninfo_all_blocks=1 00:07:43.048 --rc geninfo_unexecuted_blocks=1 00:07:43.048 00:07:43.048 ' 00:07:43.048 05:48:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.048 05:48:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.048 05:48:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.048 05:48:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.048 05:48:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.048 05:48:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.048 05:48:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.048 05:48:04 -- paths/export.sh@5 -- # export PATH 00:07:43.048 05:48:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.048 05:48:04 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:43.048 05:48:04 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:43.048 05:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.048 05:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:43.048 ************************************ 00:07:43.048 START TEST dd_malloc_copy 00:07:43.048 ************************************ 00:07:43.048 05:48:04 -- common/autotest_common.sh@1114 -- # malloc_copy 00:07:43.048 05:48:04 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:43.048 05:48:04 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:43.048 05:48:04 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:43.048 05:48:04 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:43.048 05:48:04 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:43.048 05:48:04 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:43.048 05:48:04 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:43.048 05:48:04 -- dd/malloc.sh@28 -- # gen_conf 00:07:43.048 05:48:04 -- dd/common.sh@31 -- # xtrace_disable 00:07:43.048 05:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:43.307 [2024-12-15 05:48:04.705719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:43.308 [2024-12-15 05:48:04.706101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70319 ] 00:07:43.308 { 00:07:43.308 "subsystems": [ 00:07:43.308 { 00:07:43.308 "subsystem": "bdev", 00:07:43.308 "config": [ 00:07:43.308 { 00:07:43.308 "params": { 00:07:43.308 "block_size": 512, 00:07:43.308 "num_blocks": 1048576, 00:07:43.308 "name": "malloc0" 00:07:43.308 }, 00:07:43.308 "method": "bdev_malloc_create" 00:07:43.308 }, 00:07:43.308 { 00:07:43.308 "params": { 00:07:43.308 "block_size": 512, 00:07:43.308 "num_blocks": 1048576, 00:07:43.308 "name": "malloc1" 00:07:43.308 }, 00:07:43.308 "method": "bdev_malloc_create" 00:07:43.308 }, 00:07:43.308 { 00:07:43.308 "method": "bdev_wait_for_examine" 00:07:43.308 } 00:07:43.308 ] 00:07:43.308 } 00:07:43.308 ] 00:07:43.308 } 00:07:43.308 [2024-12-15 05:48:04.844264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.308 [2024-12-15 05:48:04.879861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.686  [2024-12-15T05:48:07.264Z] Copying: 245/512 [MB] (245 MBps) [2024-12-15T05:48:07.264Z] Copying: 476/512 [MB] (230 MBps) [2024-12-15T05:48:07.832Z] Copying: 512/512 [MB] (average 237 MBps) 00:07:46.191 00:07:46.191 05:48:07 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:46.191 05:48:07 -- dd/malloc.sh@33 -- # gen_conf 00:07:46.191 05:48:07 -- dd/common.sh@31 -- # xtrace_disable 00:07:46.191 05:48:07 -- common/autotest_common.sh@10 -- # set +x 00:07:46.191 [2024-12-15 05:48:07.636909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:46.191 [2024-12-15 05:48:07.637011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70355 ] 00:07:46.191 { 00:07:46.191 "subsystems": [ 00:07:46.191 { 00:07:46.191 "subsystem": "bdev", 00:07:46.191 "config": [ 00:07:46.191 { 00:07:46.191 "params": { 00:07:46.191 "block_size": 512, 00:07:46.191 "num_blocks": 1048576, 00:07:46.191 "name": "malloc0" 00:07:46.191 }, 00:07:46.191 "method": "bdev_malloc_create" 00:07:46.191 }, 00:07:46.191 { 00:07:46.191 "params": { 00:07:46.191 "block_size": 512, 00:07:46.191 "num_blocks": 1048576, 00:07:46.191 "name": "malloc1" 00:07:46.191 }, 00:07:46.191 "method": "bdev_malloc_create" 00:07:46.191 }, 00:07:46.191 { 00:07:46.191 "method": "bdev_wait_for_examine" 00:07:46.191 } 00:07:46.191 ] 00:07:46.191 } 00:07:46.191 ] 00:07:46.191 } 00:07:46.191 [2024-12-15 05:48:07.772150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.191 [2024-12-15 05:48:07.813444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.570  [2024-12-15T05:48:10.148Z] Copying: 222/512 [MB] (222 MBps) [2024-12-15T05:48:10.407Z] Copying: 449/512 [MB] (227 MBps) [2024-12-15T05:48:10.665Z] Copying: 512/512 [MB] (average 226 MBps) 00:07:49.024 00:07:49.024 00:07:49.024 real 0m5.951s 00:07:49.024 user 0m5.290s 00:07:49.024 sys 0m0.515s 00:07:49.024 05:48:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.024 ************************************ 00:07:49.024 END TEST dd_malloc_copy 00:07:49.024 ************************************ 00:07:49.024 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:49.024 ************************************ 00:07:49.024 END TEST spdk_dd_malloc 00:07:49.024 ************************************ 00:07:49.024 00:07:49.024 real 0m6.195s 00:07:49.024 user 0m5.428s 00:07:49.024 sys 0m0.624s 00:07:49.024 05:48:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.024 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:49.283 05:48:10 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:49.283 05:48:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.283 05:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.283 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:49.283 ************************************ 00:07:49.283 START TEST spdk_dd_bdev_to_bdev 00:07:49.283 ************************************ 00:07:49.283 05:48:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:49.283 * Looking for test storage... 
00:07:49.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.283 05:48:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.283 05:48:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.283 05:48:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.283 05:48:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.283 05:48:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.283 05:48:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.283 05:48:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.283 05:48:10 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.283 05:48:10 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.283 05:48:10 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.283 05:48:10 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.283 05:48:10 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.283 05:48:10 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.283 05:48:10 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.283 05:48:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.283 05:48:10 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.283 05:48:10 -- scripts/common.sh@344 -- # : 1 00:07:49.283 05:48:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.283 05:48:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.283 05:48:10 -- scripts/common.sh@364 -- # decimal 1 00:07:49.283 05:48:10 -- scripts/common.sh@352 -- # local d=1 00:07:49.283 05:48:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.283 05:48:10 -- scripts/common.sh@354 -- # echo 1 00:07:49.283 05:48:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.283 05:48:10 -- scripts/common.sh@365 -- # decimal 2 00:07:49.283 05:48:10 -- scripts/common.sh@352 -- # local d=2 00:07:49.283 05:48:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.283 05:48:10 -- scripts/common.sh@354 -- # echo 2 00:07:49.283 05:48:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.283 05:48:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.283 05:48:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.283 05:48:10 -- scripts/common.sh@367 -- # return 0 00:07:49.283 05:48:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.283 05:48:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.283 --rc genhtml_branch_coverage=1 00:07:49.283 --rc genhtml_function_coverage=1 00:07:49.283 --rc genhtml_legend=1 00:07:49.283 --rc geninfo_all_blocks=1 00:07:49.283 --rc geninfo_unexecuted_blocks=1 00:07:49.283 00:07:49.283 ' 00:07:49.283 05:48:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.283 --rc genhtml_branch_coverage=1 00:07:49.283 --rc genhtml_function_coverage=1 00:07:49.283 --rc genhtml_legend=1 00:07:49.283 --rc geninfo_all_blocks=1 00:07:49.283 --rc geninfo_unexecuted_blocks=1 00:07:49.283 00:07:49.283 ' 00:07:49.283 05:48:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.283 --rc genhtml_branch_coverage=1 00:07:49.283 --rc genhtml_function_coverage=1 00:07:49.283 --rc genhtml_legend=1 00:07:49.283 --rc geninfo_all_blocks=1 00:07:49.283 --rc geninfo_unexecuted_blocks=1 00:07:49.283 00:07:49.283 ' 00:07:49.283 05:48:10 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.283 --rc genhtml_branch_coverage=1 00:07:49.283 --rc genhtml_function_coverage=1 00:07:49.283 --rc genhtml_legend=1 00:07:49.283 --rc geninfo_all_blocks=1 00:07:49.283 --rc geninfo_unexecuted_blocks=1 00:07:49.283 00:07:49.283 ' 00:07:49.283 05:48:10 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.283 05:48:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.283 05:48:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.283 05:48:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.283 05:48:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.283 05:48:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.283 05:48:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.283 05:48:10 -- paths/export.sh@5 -- # export PATH 00:07:49.283 05:48:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:49.283 05:48:10 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:49.283 05:48:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:49.283 05:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.283 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:49.283 ************************************ 00:07:49.283 START TEST dd_inflate_file 00:07:49.283 ************************************ 00:07:49.283 05:48:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:49.542 [2024-12-15 05:48:10.954613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:49.542 [2024-12-15 05:48:10.954717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70467 ] 00:07:49.542 [2024-12-15 05:48:11.085313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.542 [2024-12-15 05:48:11.125144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.801  [2024-12-15T05:48:11.442Z] Copying: 64/64 [MB] (average 1939 MBps) 00:07:49.801 00:07:49.801 00:07:49.801 real 0m0.446s 00:07:49.801 user 0m0.207s 00:07:49.801 sys 0m0.124s 00:07:49.801 05:48:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.801 ************************************ 00:07:49.801 END TEST dd_inflate_file 00:07:49.801 ************************************ 00:07:49.801 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:49.801 05:48:11 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:49.801 05:48:11 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:49.801 05:48:11 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:49.801 05:48:11 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:49.801 05:48:11 -- dd/common.sh@31 -- # xtrace_disable 00:07:49.801 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:49.801 05:48:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:49.801 05:48:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.801 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:49.801 ************************************ 00:07:49.801 START TEST dd_copy_to_out_bdev 00:07:49.801 ************************************ 00:07:49.801 05:48:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:50.060 [2024-12-15 05:48:11.455832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:50.060 [2024-12-15 05:48:11.455966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70497 ] 00:07:50.060 { 00:07:50.060 "subsystems": [ 00:07:50.060 { 00:07:50.060 "subsystem": "bdev", 00:07:50.060 "config": [ 00:07:50.060 { 00:07:50.060 "params": { 00:07:50.060 "trtype": "pcie", 00:07:50.060 "traddr": "0000:00:06.0", 00:07:50.060 "name": "Nvme0" 00:07:50.060 }, 00:07:50.060 "method": "bdev_nvme_attach_controller" 00:07:50.060 }, 00:07:50.060 { 00:07:50.060 "params": { 00:07:50.060 "trtype": "pcie", 00:07:50.060 "traddr": "0000:00:07.0", 00:07:50.060 "name": "Nvme1" 00:07:50.060 }, 00:07:50.060 "method": "bdev_nvme_attach_controller" 00:07:50.060 }, 00:07:50.060 { 00:07:50.060 "method": "bdev_wait_for_examine" 00:07:50.060 } 00:07:50.060 ] 00:07:50.060 } 00:07:50.060 ] 00:07:50.060 } 00:07:50.060 [2024-12-15 05:48:11.587424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.060 [2024-12-15 05:48:11.621395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.438  [2024-12-15T05:48:13.338Z] Copying: 48/64 [MB] (48 MBps) [2024-12-15T05:48:13.338Z] Copying: 64/64 [MB] (average 48 MBps) 00:07:51.697 00:07:51.697 00:07:51.697 real 0m1.908s 00:07:51.697 user 0m1.671s 00:07:51.697 sys 0m0.161s 00:07:51.697 05:48:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.697 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:07:51.697 ************************************ 00:07:51.697 END TEST dd_copy_to_out_bdev 00:07:51.697 ************************************ 00:07:51.956 05:48:13 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:51.956 05:48:13 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:51.956 05:48:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.956 05:48:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.956 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:07:51.956 ************************************ 00:07:51.956 START TEST dd_offset_magic 00:07:51.956 ************************************ 00:07:51.956 05:48:13 -- common/autotest_common.sh@1114 -- # offset_magic 00:07:51.956 05:48:13 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:51.956 05:48:13 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:51.956 05:48:13 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:51.956 05:48:13 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:51.956 05:48:13 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:51.956 05:48:13 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:51.956 05:48:13 -- dd/common.sh@31 -- # xtrace_disable 00:07:51.956 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:07:51.956 [2024-12-15 05:48:13.421287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:51.956 [2024-12-15 05:48:13.421388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70537 ] 00:07:51.956 { 00:07:51.956 "subsystems": [ 00:07:51.956 { 00:07:51.956 "subsystem": "bdev", 00:07:51.956 "config": [ 00:07:51.956 { 00:07:51.956 "params": { 00:07:51.956 "trtype": "pcie", 00:07:51.956 "traddr": "0000:00:06.0", 00:07:51.956 "name": "Nvme0" 00:07:51.956 }, 00:07:51.956 "method": "bdev_nvme_attach_controller" 00:07:51.956 }, 00:07:51.956 { 00:07:51.956 "params": { 00:07:51.956 "trtype": "pcie", 00:07:51.956 "traddr": "0000:00:07.0", 00:07:51.956 "name": "Nvme1" 00:07:51.956 }, 00:07:51.956 "method": "bdev_nvme_attach_controller" 00:07:51.956 }, 00:07:51.956 { 00:07:51.956 "method": "bdev_wait_for_examine" 00:07:51.956 } 00:07:51.956 ] 00:07:51.956 } 00:07:51.956 ] 00:07:51.956 } 00:07:51.956 [2024-12-15 05:48:13.555824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.216 [2024-12-15 05:48:13.596275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.216  [2024-12-15T05:48:14.116Z] Copying: 65/65 [MB] (average 802 MBps) 00:07:52.475 00:07:52.475 05:48:14 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:52.475 05:48:14 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:52.475 05:48:14 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.475 05:48:14 -- common/autotest_common.sh@10 -- # set +x 00:07:52.475 [2024-12-15 05:48:14.068687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:52.475 [2024-12-15 05:48:14.068786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70557 ] 00:07:52.475 { 00:07:52.475 "subsystems": [ 00:07:52.475 { 00:07:52.475 "subsystem": "bdev", 00:07:52.475 "config": [ 00:07:52.475 { 00:07:52.475 "params": { 00:07:52.475 "trtype": "pcie", 00:07:52.475 "traddr": "0000:00:06.0", 00:07:52.475 "name": "Nvme0" 00:07:52.475 }, 00:07:52.475 "method": "bdev_nvme_attach_controller" 00:07:52.475 }, 00:07:52.475 { 00:07:52.475 "params": { 00:07:52.475 "trtype": "pcie", 00:07:52.475 "traddr": "0000:00:07.0", 00:07:52.475 "name": "Nvme1" 00:07:52.475 }, 00:07:52.475 "method": "bdev_nvme_attach_controller" 00:07:52.475 }, 00:07:52.475 { 00:07:52.475 "method": "bdev_wait_for_examine" 00:07:52.475 } 00:07:52.475 ] 00:07:52.475 } 00:07:52.475 ] 00:07:52.475 } 00:07:52.734 [2024-12-15 05:48:14.204030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.734 [2024-12-15 05:48:14.237219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.993  [2024-12-15T05:48:14.634Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:52.993 00:07:52.993 05:48:14 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:52.993 05:48:14 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:52.993 05:48:14 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:52.993 05:48:14 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:52.993 05:48:14 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:52.993 05:48:14 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.993 05:48:14 -- common/autotest_common.sh@10 -- # set +x 00:07:52.993 [2024-12-15 05:48:14.620023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:52.993 [2024-12-15 05:48:14.620116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70566 ] 00:07:52.993 { 00:07:52.993 "subsystems": [ 00:07:52.993 { 00:07:52.993 "subsystem": "bdev", 00:07:52.993 "config": [ 00:07:52.993 { 00:07:52.993 "params": { 00:07:52.993 "trtype": "pcie", 00:07:52.993 "traddr": "0000:00:06.0", 00:07:52.993 "name": "Nvme0" 00:07:52.993 }, 00:07:52.993 "method": "bdev_nvme_attach_controller" 00:07:52.993 }, 00:07:52.993 { 00:07:52.993 "params": { 00:07:52.993 "trtype": "pcie", 00:07:52.993 "traddr": "0000:00:07.0", 00:07:52.993 "name": "Nvme1" 00:07:52.993 }, 00:07:52.993 "method": "bdev_nvme_attach_controller" 00:07:52.993 }, 00:07:52.993 { 00:07:52.993 "method": "bdev_wait_for_examine" 00:07:52.993 } 00:07:52.993 ] 00:07:52.993 } 00:07:52.993 ] 00:07:52.993 } 00:07:53.253 [2024-12-15 05:48:14.756427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.253 [2024-12-15 05:48:14.792739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.512  [2024-12-15T05:48:15.412Z] Copying: 65/65 [MB] (average 955 MBps) 00:07:53.771 00:07:53.771 05:48:15 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:53.771 05:48:15 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:53.771 05:48:15 -- dd/common.sh@31 -- # xtrace_disable 00:07:53.771 05:48:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.771 [2024-12-15 05:48:15.251522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:53.771 [2024-12-15 05:48:15.251628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70586 ] 00:07:53.771 { 00:07:53.771 "subsystems": [ 00:07:53.771 { 00:07:53.771 "subsystem": "bdev", 00:07:53.771 "config": [ 00:07:53.771 { 00:07:53.771 "params": { 00:07:53.771 "trtype": "pcie", 00:07:53.771 "traddr": "0000:00:06.0", 00:07:53.771 "name": "Nvme0" 00:07:53.771 }, 00:07:53.771 "method": "bdev_nvme_attach_controller" 00:07:53.771 }, 00:07:53.771 { 00:07:53.771 "params": { 00:07:53.771 "trtype": "pcie", 00:07:53.771 "traddr": "0000:00:07.0", 00:07:53.771 "name": "Nvme1" 00:07:53.771 }, 00:07:53.771 "method": "bdev_nvme_attach_controller" 00:07:53.771 }, 00:07:53.771 { 00:07:53.771 "method": "bdev_wait_for_examine" 00:07:53.771 } 00:07:53.771 ] 00:07:53.771 } 00:07:53.771 ] 00:07:53.771 } 00:07:53.771 [2024-12-15 05:48:15.387166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.030 [2024-12-15 05:48:15.420727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.030  [2024-12-15T05:48:15.930Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:54.289 00:07:54.289 ************************************ 00:07:54.289 END TEST dd_offset_magic 00:07:54.289 ************************************ 00:07:54.289 05:48:15 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:54.289 05:48:15 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:54.289 00:07:54.289 real 0m2.367s 00:07:54.289 user 0m1.689s 00:07:54.289 sys 0m0.480s 00:07:54.289 05:48:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.289 05:48:15 -- common/autotest_common.sh@10 -- # set +x 00:07:54.289 05:48:15 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:54.289 05:48:15 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:54.289 05:48:15 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:54.289 05:48:15 -- dd/common.sh@11 -- # local nvme_ref= 00:07:54.289 05:48:15 -- dd/common.sh@12 -- # local size=4194330 00:07:54.289 05:48:15 -- dd/common.sh@14 -- # local bs=1048576 00:07:54.289 05:48:15 -- dd/common.sh@15 -- # local count=5 00:07:54.289 05:48:15 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:54.289 05:48:15 -- dd/common.sh@18 -- # gen_conf 00:07:54.289 05:48:15 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.289 05:48:15 -- common/autotest_common.sh@10 -- # set +x 00:07:54.289 [2024-12-15 05:48:15.827338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:54.289 [2024-12-15 05:48:15.827429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70617 ] 00:07:54.289 { 00:07:54.289 "subsystems": [ 00:07:54.289 { 00:07:54.289 "subsystem": "bdev", 00:07:54.289 "config": [ 00:07:54.289 { 00:07:54.289 "params": { 00:07:54.289 "trtype": "pcie", 00:07:54.289 "traddr": "0000:00:06.0", 00:07:54.289 "name": "Nvme0" 00:07:54.289 }, 00:07:54.289 "method": "bdev_nvme_attach_controller" 00:07:54.289 }, 00:07:54.289 { 00:07:54.289 "params": { 00:07:54.289 "trtype": "pcie", 00:07:54.289 "traddr": "0000:00:07.0", 00:07:54.289 "name": "Nvme1" 00:07:54.289 }, 00:07:54.289 "method": "bdev_nvme_attach_controller" 00:07:54.289 }, 00:07:54.289 { 00:07:54.289 "method": "bdev_wait_for_examine" 00:07:54.289 } 00:07:54.289 ] 00:07:54.289 } 00:07:54.289 ] 00:07:54.289 } 00:07:54.549 [2024-12-15 05:48:15.964308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.549 [2024-12-15 05:48:15.998546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.549  [2024-12-15T05:48:16.449Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:54.808 00:07:54.808 05:48:16 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:54.808 05:48:16 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:54.808 05:48:16 -- dd/common.sh@11 -- # local nvme_ref= 00:07:54.808 05:48:16 -- dd/common.sh@12 -- # local size=4194330 00:07:54.808 05:48:16 -- dd/common.sh@14 -- # local bs=1048576 00:07:54.808 05:48:16 -- dd/common.sh@15 -- # local count=5 00:07:54.808 05:48:16 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:54.808 05:48:16 -- dd/common.sh@18 -- # gen_conf 00:07:54.808 05:48:16 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.808 05:48:16 -- common/autotest_common.sh@10 -- # set +x 00:07:54.808 [2024-12-15 05:48:16.379380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:54.808 [2024-12-15 05:48:16.379485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70632 ] 00:07:54.808 { 00:07:54.808 "subsystems": [ 00:07:54.808 { 00:07:54.808 "subsystem": "bdev", 00:07:54.808 "config": [ 00:07:54.808 { 00:07:54.808 "params": { 00:07:54.808 "trtype": "pcie", 00:07:54.808 "traddr": "0000:00:06.0", 00:07:54.808 "name": "Nvme0" 00:07:54.808 }, 00:07:54.808 "method": "bdev_nvme_attach_controller" 00:07:54.808 }, 00:07:54.808 { 00:07:54.808 "params": { 00:07:54.808 "trtype": "pcie", 00:07:54.808 "traddr": "0000:00:07.0", 00:07:54.808 "name": "Nvme1" 00:07:54.808 }, 00:07:54.808 "method": "bdev_nvme_attach_controller" 00:07:54.808 }, 00:07:54.808 { 00:07:54.808 "method": "bdev_wait_for_examine" 00:07:54.808 } 00:07:54.808 ] 00:07:54.808 } 00:07:54.808 ] 00:07:54.808 } 00:07:55.067 [2024-12-15 05:48:16.516510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.067 [2024-12-15 05:48:16.550788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.326  [2024-12-15T05:48:16.967Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:07:55.326 00:07:55.326 05:48:16 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:55.326 00:07:55.326 real 0m6.191s 00:07:55.326 user 0m4.493s 00:07:55.326 sys 0m1.190s 00:07:55.326 05:48:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.326 05:48:16 -- common/autotest_common.sh@10 -- # set +x 00:07:55.326 ************************************ 00:07:55.326 END TEST spdk_dd_bdev_to_bdev 00:07:55.326 ************************************ 00:07:55.326 05:48:16 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:55.326 05:48:16 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:55.326 05:48:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.326 05:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.326 05:48:16 -- common/autotest_common.sh@10 -- # set +x 00:07:55.326 ************************************ 00:07:55.326 START TEST spdk_dd_uring 00:07:55.326 ************************************ 00:07:55.326 05:48:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:55.586 * Looking for test storage... 
00:07:55.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:55.586 05:48:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:55.586 05:48:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:55.586 05:48:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:55.586 05:48:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:55.586 05:48:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:55.586 05:48:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:55.586 05:48:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:55.586 05:48:17 -- scripts/common.sh@335 -- # IFS=.-: 00:07:55.586 05:48:17 -- scripts/common.sh@335 -- # read -ra ver1 00:07:55.586 05:48:17 -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.586 05:48:17 -- scripts/common.sh@336 -- # read -ra ver2 00:07:55.586 05:48:17 -- scripts/common.sh@337 -- # local 'op=<' 00:07:55.586 05:48:17 -- scripts/common.sh@339 -- # ver1_l=2 00:07:55.586 05:48:17 -- scripts/common.sh@340 -- # ver2_l=1 00:07:55.586 05:48:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:55.586 05:48:17 -- scripts/common.sh@343 -- # case "$op" in 00:07:55.586 05:48:17 -- scripts/common.sh@344 -- # : 1 00:07:55.586 05:48:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:55.586 05:48:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.586 05:48:17 -- scripts/common.sh@364 -- # decimal 1 00:07:55.586 05:48:17 -- scripts/common.sh@352 -- # local d=1 00:07:55.586 05:48:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.586 05:48:17 -- scripts/common.sh@354 -- # echo 1 00:07:55.586 05:48:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:55.586 05:48:17 -- scripts/common.sh@365 -- # decimal 2 00:07:55.586 05:48:17 -- scripts/common.sh@352 -- # local d=2 00:07:55.586 05:48:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.586 05:48:17 -- scripts/common.sh@354 -- # echo 2 00:07:55.586 05:48:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:55.586 05:48:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:55.586 05:48:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:55.586 05:48:17 -- scripts/common.sh@367 -- # return 0 00:07:55.586 05:48:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.586 05:48:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:55.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.586 --rc genhtml_branch_coverage=1 00:07:55.586 --rc genhtml_function_coverage=1 00:07:55.586 --rc genhtml_legend=1 00:07:55.586 --rc geninfo_all_blocks=1 00:07:55.586 --rc geninfo_unexecuted_blocks=1 00:07:55.586 00:07:55.586 ' 00:07:55.586 05:48:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:55.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.586 --rc genhtml_branch_coverage=1 00:07:55.586 --rc genhtml_function_coverage=1 00:07:55.586 --rc genhtml_legend=1 00:07:55.586 --rc geninfo_all_blocks=1 00:07:55.586 --rc geninfo_unexecuted_blocks=1 00:07:55.586 00:07:55.586 ' 00:07:55.586 05:48:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:55.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.586 --rc genhtml_branch_coverage=1 00:07:55.586 --rc genhtml_function_coverage=1 00:07:55.586 --rc genhtml_legend=1 00:07:55.586 --rc geninfo_all_blocks=1 00:07:55.586 --rc geninfo_unexecuted_blocks=1 00:07:55.586 00:07:55.586 ' 00:07:55.586 05:48:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:55.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.586 --rc genhtml_branch_coverage=1 00:07:55.586 --rc genhtml_function_coverage=1 00:07:55.586 --rc genhtml_legend=1 00:07:55.586 --rc geninfo_all_blocks=1 00:07:55.586 --rc geninfo_unexecuted_blocks=1 00:07:55.586 00:07:55.586 ' 00:07:55.586 05:48:17 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.586 05:48:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.586 05:48:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.586 05:48:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.586 05:48:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.586 05:48:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.586 05:48:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.586 05:48:17 -- paths/export.sh@5 -- # export PATH 00:07:55.586 05:48:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.586 05:48:17 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:55.586 05:48:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.586 05:48:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.586 05:48:17 -- common/autotest_common.sh@10 -- # set +x 00:07:55.586 ************************************ 00:07:55.586 START TEST dd_uring_copy 00:07:55.586 ************************************ 00:07:55.586 05:48:17 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:07:55.586 05:48:17 -- dd/uring.sh@15 -- # local zram_dev_id 00:07:55.586 05:48:17 -- dd/uring.sh@16 -- # local magic 00:07:55.586 05:48:17 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:55.586 05:48:17 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:55.586 05:48:17 -- dd/uring.sh@19 -- # local verify_magic 00:07:55.586 05:48:17 -- dd/uring.sh@21 -- # init_zram 00:07:55.586 05:48:17 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:55.586 05:48:17 -- dd/common.sh@164 -- # return 00:07:55.586 05:48:17 -- dd/uring.sh@22 -- # create_zram_dev 00:07:55.586 05:48:17 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:55.587 05:48:17 -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:55.587 05:48:17 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:55.587 05:48:17 -- dd/common.sh@181 -- # local id=1 00:07:55.587 05:48:17 -- dd/common.sh@182 -- # local size=512M 00:07:55.587 05:48:17 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:55.587 05:48:17 -- dd/common.sh@186 -- # echo 512M 00:07:55.587 05:48:17 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:55.587 05:48:17 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:55.587 05:48:17 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:55.587 05:48:17 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:55.587 05:48:17 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:55.587 05:48:17 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:55.587 05:48:17 -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:55.587 05:48:17 -- dd/common.sh@98 -- # xtrace_disable 00:07:55.587 05:48:17 -- common/autotest_common.sh@10 -- # set +x 00:07:55.587 05:48:17 -- dd/uring.sh@41 -- # magic=clk4lb7yiqt92kpxkdxabxkgibyor0dwqhlx1huj2z1kjgcbysosz0s2exk1vq65xon9z5s39wt1b43utzfigwn1lljg84cuk2qs53hlw9fze2olo0wm80cqexerzy92atq2nf0o9x1mer3yhc9j42tnyo1g48mhtj3nakfdn48lu9522u0ym783vohvzp6ysshgqtescqq6z3v839tf9aay296b3vid83etoajhfh6zhgcomf9hgff5l478zm6nqys12xemqud6k5ec7gecae17ejfdmjbp9fxb4u19f5s96oiuah60of06lczwvn36to9ku3e4nwd7uji3gj2buvwhzxkcu6jpu3if1wb4umgsx5e5kdjtd08k42ctn84vcqziat267sgleghb4fwvyslp4nsxe5z5ykgccfrcibdbwgk9h4ts9nywjszbel4pmne8rkgbrhgx5ckorukau492umi34atkbu116rmys7g60xq7rys5qtty8k9ggec2znkq1nyy3qjtd03dx1va653j3qbipewnp5n7jxkcs7c1ywpv8l2al4y2mdww14qyobn2gze0k8uubr27ursco1jdwq5dl2ycup2jemnnderth4fl2g05068vapj2y4qsmvdncxw2dtk0afp05in5usux6817kc1rkikxh1qd0ym8a3w0sfik3weqmtjo5l6jymof0lhq8ju1jiiahxd05oxlcofqzzngdi6opf3qhgi5hdgg1000sk20fn48rjn50y9z8fkgz69fcoic5w1d5erw1ht4c97mbt96f3zw5iust4jdjng9kbm0b2kwzjfleyvyd5xufoyhh8gvpy93z7nmzcrzrvpb41rsbs71jwuw57cevthywl814q64x05tvio5b4flf5s7o23mzh4c1zylckolvjso6k3p5tssez60z0gu2qt62apo061w0ly1489q3hik13agafnqyyxfqb0ktbtv68hz21e5r9vlwropyk9zl2j28xskiedopr5w 00:07:55.587 05:48:17 -- dd/uring.sh@42 -- # echo 
clk4lb7yiqt92kpxkdxabxkgibyor0dwqhlx1huj2z1kjgcbysosz0s2exk1vq65xon9z5s39wt1b43utzfigwn1lljg84cuk2qs53hlw9fze2olo0wm80cqexerzy92atq2nf0o9x1mer3yhc9j42tnyo1g48mhtj3nakfdn48lu9522u0ym783vohvzp6ysshgqtescqq6z3v839tf9aay296b3vid83etoajhfh6zhgcomf9hgff5l478zm6nqys12xemqud6k5ec7gecae17ejfdmjbp9fxb4u19f5s96oiuah60of06lczwvn36to9ku3e4nwd7uji3gj2buvwhzxkcu6jpu3if1wb4umgsx5e5kdjtd08k42ctn84vcqziat267sgleghb4fwvyslp4nsxe5z5ykgccfrcibdbwgk9h4ts9nywjszbel4pmne8rkgbrhgx5ckorukau492umi34atkbu116rmys7g60xq7rys5qtty8k9ggec2znkq1nyy3qjtd03dx1va653j3qbipewnp5n7jxkcs7c1ywpv8l2al4y2mdww14qyobn2gze0k8uubr27ursco1jdwq5dl2ycup2jemnnderth4fl2g05068vapj2y4qsmvdncxw2dtk0afp05in5usux6817kc1rkikxh1qd0ym8a3w0sfik3weqmtjo5l6jymof0lhq8ju1jiiahxd05oxlcofqzzngdi6opf3qhgi5hdgg1000sk20fn48rjn50y9z8fkgz69fcoic5w1d5erw1ht4c97mbt96f3zw5iust4jdjng9kbm0b2kwzjfleyvyd5xufoyhh8gvpy93z7nmzcrzrvpb41rsbs71jwuw57cevthywl814q64x05tvio5b4flf5s7o23mzh4c1zylckolvjso6k3p5tssez60z0gu2qt62apo061w0ly1489q3hik13agafnqyyxfqb0ktbtv68hz21e5r9vlwropyk9zl2j28xskiedopr5w 00:07:55.587 05:48:17 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:55.587 [2024-12-15 05:48:17.192088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:55.587 [2024-12-15 05:48:17.192180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70708 ] 00:07:55.846 [2024-12-15 05:48:17.327791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.846 [2024-12-15 05:48:17.358173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.414  [2024-12-15T05:48:18.055Z] Copying: 511/511 [MB] (average 1706 MBps) 00:07:56.414 00:07:56.414 05:48:18 -- dd/uring.sh@54 -- # gen_conf 00:07:56.414 05:48:18 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:56.414 05:48:18 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.414 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:07:56.673 { 00:07:56.673 "subsystems": [ 00:07:56.673 { 00:07:56.673 "subsystem": "bdev", 00:07:56.673 "config": [ 00:07:56.673 { 00:07:56.673 "params": { 00:07:56.673 "block_size": 512, 00:07:56.673 "num_blocks": 1048576, 00:07:56.673 "name": "malloc0" 00:07:56.673 }, 00:07:56.673 "method": "bdev_malloc_create" 00:07:56.673 }, 00:07:56.673 { 00:07:56.673 "params": { 00:07:56.673 "filename": "/dev/zram1", 00:07:56.673 "name": "uring0" 00:07:56.673 }, 00:07:56.673 "method": "bdev_uring_create" 00:07:56.673 }, 00:07:56.673 { 00:07:56.673 "method": "bdev_wait_for_examine" 00:07:56.673 } 00:07:56.673 ] 00:07:56.673 } 00:07:56.673 ] 00:07:56.673 } 00:07:56.673 [2024-12-15 05:48:18.070001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:56.673 [2024-12-15 05:48:18.070137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70722 ] 00:07:56.673 [2024-12-15 05:48:18.216344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.673 [2024-12-15 05:48:18.249221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.052  [2024-12-15T05:48:20.630Z] Copying: 229/512 [MB] (229 MBps) [2024-12-15T05:48:20.630Z] Copying: 460/512 [MB] (230 MBps) [2024-12-15T05:48:20.889Z] Copying: 512/512 [MB] (average 231 MBps) 00:07:59.248 00:07:59.248 05:48:20 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:59.248 05:48:20 -- dd/uring.sh@60 -- # gen_conf 00:07:59.248 05:48:20 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.248 05:48:20 -- common/autotest_common.sh@10 -- # set +x 00:07:59.507 [2024-12-15 05:48:20.892247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:59.507 [2024-12-15 05:48:20.892348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70754 ] 00:07:59.507 { 00:07:59.507 "subsystems": [ 00:07:59.507 { 00:07:59.507 "subsystem": "bdev", 00:07:59.507 "config": [ 00:07:59.507 { 00:07:59.507 "params": { 00:07:59.507 "block_size": 512, 00:07:59.507 "num_blocks": 1048576, 00:07:59.507 "name": "malloc0" 00:07:59.507 }, 00:07:59.507 "method": "bdev_malloc_create" 00:07:59.507 }, 00:07:59.507 { 00:07:59.507 "params": { 00:07:59.507 "filename": "/dev/zram1", 00:07:59.507 "name": "uring0" 00:07:59.507 }, 00:07:59.507 "method": "bdev_uring_create" 00:07:59.507 }, 00:07:59.507 { 00:07:59.507 "method": "bdev_wait_for_examine" 00:07:59.507 } 00:07:59.507 ] 00:07:59.507 } 00:07:59.507 ] 00:07:59.507 } 00:07:59.507 [2024-12-15 05:48:21.029672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.507 [2024-12-15 05:48:21.059843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.885  [2024-12-15T05:48:23.463Z] Copying: 161/512 [MB] (161 MBps) [2024-12-15T05:48:24.399Z] Copying: 310/512 [MB] (148 MBps) [2024-12-15T05:48:24.661Z] Copying: 465/512 [MB] (155 MBps) [2024-12-15T05:48:24.920Z] Copying: 512/512 [MB] (average 152 MBps) 00:08:03.280 00:08:03.280 05:48:24 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:03.280 05:48:24 -- dd/uring.sh@66 -- # [[ 
clk4lb7yiqt92kpxkdxabxkgibyor0dwqhlx1huj2z1kjgcbysosz0s2exk1vq65xon9z5s39wt1b43utzfigwn1lljg84cuk2qs53hlw9fze2olo0wm80cqexerzy92atq2nf0o9x1mer3yhc9j42tnyo1g48mhtj3nakfdn48lu9522u0ym783vohvzp6ysshgqtescqq6z3v839tf9aay296b3vid83etoajhfh6zhgcomf9hgff5l478zm6nqys12xemqud6k5ec7gecae17ejfdmjbp9fxb4u19f5s96oiuah60of06lczwvn36to9ku3e4nwd7uji3gj2buvwhzxkcu6jpu3if1wb4umgsx5e5kdjtd08k42ctn84vcqziat267sgleghb4fwvyslp4nsxe5z5ykgccfrcibdbwgk9h4ts9nywjszbel4pmne8rkgbrhgx5ckorukau492umi34atkbu116rmys7g60xq7rys5qtty8k9ggec2znkq1nyy3qjtd03dx1va653j3qbipewnp5n7jxkcs7c1ywpv8l2al4y2mdww14qyobn2gze0k8uubr27ursco1jdwq5dl2ycup2jemnnderth4fl2g05068vapj2y4qsmvdncxw2dtk0afp05in5usux6817kc1rkikxh1qd0ym8a3w0sfik3weqmtjo5l6jymof0lhq8ju1jiiahxd05oxlcofqzzngdi6opf3qhgi5hdgg1000sk20fn48rjn50y9z8fkgz69fcoic5w1d5erw1ht4c97mbt96f3zw5iust4jdjng9kbm0b2kwzjfleyvyd5xufoyhh8gvpy93z7nmzcrzrvpb41rsbs71jwuw57cevthywl814q64x05tvio5b4flf5s7o23mzh4c1zylckolvjso6k3p5tssez60z0gu2qt62apo061w0ly1489q3hik13agafnqyyxfqb0ktbtv68hz21e5r9vlwropyk9zl2j28xskiedopr5w == \c\l\k\4\l\b\7\y\i\q\t\9\2\k\p\x\k\d\x\a\b\x\k\g\i\b\y\o\r\0\d\w\q\h\l\x\1\h\u\j\2\z\1\k\j\g\c\b\y\s\o\s\z\0\s\2\e\x\k\1\v\q\6\5\x\o\n\9\z\5\s\3\9\w\t\1\b\4\3\u\t\z\f\i\g\w\n\1\l\l\j\g\8\4\c\u\k\2\q\s\5\3\h\l\w\9\f\z\e\2\o\l\o\0\w\m\8\0\c\q\e\x\e\r\z\y\9\2\a\t\q\2\n\f\0\o\9\x\1\m\e\r\3\y\h\c\9\j\4\2\t\n\y\o\1\g\4\8\m\h\t\j\3\n\a\k\f\d\n\4\8\l\u\9\5\2\2\u\0\y\m\7\8\3\v\o\h\v\z\p\6\y\s\s\h\g\q\t\e\s\c\q\q\6\z\3\v\8\3\9\t\f\9\a\a\y\2\9\6\b\3\v\i\d\8\3\e\t\o\a\j\h\f\h\6\z\h\g\c\o\m\f\9\h\g\f\f\5\l\4\7\8\z\m\6\n\q\y\s\1\2\x\e\m\q\u\d\6\k\5\e\c\7\g\e\c\a\e\1\7\e\j\f\d\m\j\b\p\9\f\x\b\4\u\1\9\f\5\s\9\6\o\i\u\a\h\6\0\o\f\0\6\l\c\z\w\v\n\3\6\t\o\9\k\u\3\e\4\n\w\d\7\u\j\i\3\g\j\2\b\u\v\w\h\z\x\k\c\u\6\j\p\u\3\i\f\1\w\b\4\u\m\g\s\x\5\e\5\k\d\j\t\d\0\8\k\4\2\c\t\n\8\4\v\c\q\z\i\a\t\2\6\7\s\g\l\e\g\h\b\4\f\w\v\y\s\l\p\4\n\s\x\e\5\z\5\y\k\g\c\c\f\r\c\i\b\d\b\w\g\k\9\h\4\t\s\9\n\y\w\j\s\z\b\e\l\4\p\m\n\e\8\r\k\g\b\r\h\g\x\5\c\k\o\r\u\k\a\u\4\9\2\u\m\i\3\4\a\t\k\b\u\1\1\6\r\m\y\s\7\g\6\0\x\q\7\r\y\s\5\q\t\t\y\8\k\9\g\g\e\c\2\z\n\k\q\1\n\y\y\3\q\j\t\d\0\3\d\x\1\v\a\6\5\3\j\3\q\b\i\p\e\w\n\p\5\n\7\j\x\k\c\s\7\c\1\y\w\p\v\8\l\2\a\l\4\y\2\m\d\w\w\1\4\q\y\o\b\n\2\g\z\e\0\k\8\u\u\b\r\2\7\u\r\s\c\o\1\j\d\w\q\5\d\l\2\y\c\u\p\2\j\e\m\n\n\d\e\r\t\h\4\f\l\2\g\0\5\0\6\8\v\a\p\j\2\y\4\q\s\m\v\d\n\c\x\w\2\d\t\k\0\a\f\p\0\5\i\n\5\u\s\u\x\6\8\1\7\k\c\1\r\k\i\k\x\h\1\q\d\0\y\m\8\a\3\w\0\s\f\i\k\3\w\e\q\m\t\j\o\5\l\6\j\y\m\o\f\0\l\h\q\8\j\u\1\j\i\i\a\h\x\d\0\5\o\x\l\c\o\f\q\z\z\n\g\d\i\6\o\p\f\3\q\h\g\i\5\h\d\g\g\1\0\0\0\s\k\2\0\f\n\4\8\r\j\n\5\0\y\9\z\8\f\k\g\z\6\9\f\c\o\i\c\5\w\1\d\5\e\r\w\1\h\t\4\c\9\7\m\b\t\9\6\f\3\z\w\5\i\u\s\t\4\j\d\j\n\g\9\k\b\m\0\b\2\k\w\z\j\f\l\e\y\v\y\d\5\x\u\f\o\y\h\h\8\g\v\p\y\9\3\z\7\n\m\z\c\r\z\r\v\p\b\4\1\r\s\b\s\7\1\j\w\u\w\5\7\c\e\v\t\h\y\w\l\8\1\4\q\6\4\x\0\5\t\v\i\o\5\b\4\f\l\f\5\s\7\o\2\3\m\z\h\4\c\1\z\y\l\c\k\o\l\v\j\s\o\6\k\3\p\5\t\s\s\e\z\6\0\z\0\g\u\2\q\t\6\2\a\p\o\0\6\1\w\0\l\y\1\4\8\9\q\3\h\i\k\1\3\a\g\a\f\n\q\y\y\x\f\q\b\0\k\t\b\t\v\6\8\h\z\2\1\e\5\r\9\v\l\w\r\o\p\y\k\9\z\l\2\j\2\8\x\s\k\i\e\d\o\p\r\5\w ]] 00:08:03.280 05:48:24 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:03.280 05:48:24 -- dd/uring.sh@69 -- # [[ 
clk4lb7yiqt92kpxkdxabxkgibyor0dwqhlx1huj2z1kjgcbysosz0s2exk1vq65xon9z5s39wt1b43utzfigwn1lljg84cuk2qs53hlw9fze2olo0wm80cqexerzy92atq2nf0o9x1mer3yhc9j42tnyo1g48mhtj3nakfdn48lu9522u0ym783vohvzp6ysshgqtescqq6z3v839tf9aay296b3vid83etoajhfh6zhgcomf9hgff5l478zm6nqys12xemqud6k5ec7gecae17ejfdmjbp9fxb4u19f5s96oiuah60of06lczwvn36to9ku3e4nwd7uji3gj2buvwhzxkcu6jpu3if1wb4umgsx5e5kdjtd08k42ctn84vcqziat267sgleghb4fwvyslp4nsxe5z5ykgccfrcibdbwgk9h4ts9nywjszbel4pmne8rkgbrhgx5ckorukau492umi34atkbu116rmys7g60xq7rys5qtty8k9ggec2znkq1nyy3qjtd03dx1va653j3qbipewnp5n7jxkcs7c1ywpv8l2al4y2mdww14qyobn2gze0k8uubr27ursco1jdwq5dl2ycup2jemnnderth4fl2g05068vapj2y4qsmvdncxw2dtk0afp05in5usux6817kc1rkikxh1qd0ym8a3w0sfik3weqmtjo5l6jymof0lhq8ju1jiiahxd05oxlcofqzzngdi6opf3qhgi5hdgg1000sk20fn48rjn50y9z8fkgz69fcoic5w1d5erw1ht4c97mbt96f3zw5iust4jdjng9kbm0b2kwzjfleyvyd5xufoyhh8gvpy93z7nmzcrzrvpb41rsbs71jwuw57cevthywl814q64x05tvio5b4flf5s7o23mzh4c1zylckolvjso6k3p5tssez60z0gu2qt62apo061w0ly1489q3hik13agafnqyyxfqb0ktbtv68hz21e5r9vlwropyk9zl2j28xskiedopr5w == \c\l\k\4\l\b\7\y\i\q\t\9\2\k\p\x\k\d\x\a\b\x\k\g\i\b\y\o\r\0\d\w\q\h\l\x\1\h\u\j\2\z\1\k\j\g\c\b\y\s\o\s\z\0\s\2\e\x\k\1\v\q\6\5\x\o\n\9\z\5\s\3\9\w\t\1\b\4\3\u\t\z\f\i\g\w\n\1\l\l\j\g\8\4\c\u\k\2\q\s\5\3\h\l\w\9\f\z\e\2\o\l\o\0\w\m\8\0\c\q\e\x\e\r\z\y\9\2\a\t\q\2\n\f\0\o\9\x\1\m\e\r\3\y\h\c\9\j\4\2\t\n\y\o\1\g\4\8\m\h\t\j\3\n\a\k\f\d\n\4\8\l\u\9\5\2\2\u\0\y\m\7\8\3\v\o\h\v\z\p\6\y\s\s\h\g\q\t\e\s\c\q\q\6\z\3\v\8\3\9\t\f\9\a\a\y\2\9\6\b\3\v\i\d\8\3\e\t\o\a\j\h\f\h\6\z\h\g\c\o\m\f\9\h\g\f\f\5\l\4\7\8\z\m\6\n\q\y\s\1\2\x\e\m\q\u\d\6\k\5\e\c\7\g\e\c\a\e\1\7\e\j\f\d\m\j\b\p\9\f\x\b\4\u\1\9\f\5\s\9\6\o\i\u\a\h\6\0\o\f\0\6\l\c\z\w\v\n\3\6\t\o\9\k\u\3\e\4\n\w\d\7\u\j\i\3\g\j\2\b\u\v\w\h\z\x\k\c\u\6\j\p\u\3\i\f\1\w\b\4\u\m\g\s\x\5\e\5\k\d\j\t\d\0\8\k\4\2\c\t\n\8\4\v\c\q\z\i\a\t\2\6\7\s\g\l\e\g\h\b\4\f\w\v\y\s\l\p\4\n\s\x\e\5\z\5\y\k\g\c\c\f\r\c\i\b\d\b\w\g\k\9\h\4\t\s\9\n\y\w\j\s\z\b\e\l\4\p\m\n\e\8\r\k\g\b\r\h\g\x\5\c\k\o\r\u\k\a\u\4\9\2\u\m\i\3\4\a\t\k\b\u\1\1\6\r\m\y\s\7\g\6\0\x\q\7\r\y\s\5\q\t\t\y\8\k\9\g\g\e\c\2\z\n\k\q\1\n\y\y\3\q\j\t\d\0\3\d\x\1\v\a\6\5\3\j\3\q\b\i\p\e\w\n\p\5\n\7\j\x\k\c\s\7\c\1\y\w\p\v\8\l\2\a\l\4\y\2\m\d\w\w\1\4\q\y\o\b\n\2\g\z\e\0\k\8\u\u\b\r\2\7\u\r\s\c\o\1\j\d\w\q\5\d\l\2\y\c\u\p\2\j\e\m\n\n\d\e\r\t\h\4\f\l\2\g\0\5\0\6\8\v\a\p\j\2\y\4\q\s\m\v\d\n\c\x\w\2\d\t\k\0\a\f\p\0\5\i\n\5\u\s\u\x\6\8\1\7\k\c\1\r\k\i\k\x\h\1\q\d\0\y\m\8\a\3\w\0\s\f\i\k\3\w\e\q\m\t\j\o\5\l\6\j\y\m\o\f\0\l\h\q\8\j\u\1\j\i\i\a\h\x\d\0\5\o\x\l\c\o\f\q\z\z\n\g\d\i\6\o\p\f\3\q\h\g\i\5\h\d\g\g\1\0\0\0\s\k\2\0\f\n\4\8\r\j\n\5\0\y\9\z\8\f\k\g\z\6\9\f\c\o\i\c\5\w\1\d\5\e\r\w\1\h\t\4\c\9\7\m\b\t\9\6\f\3\z\w\5\i\u\s\t\4\j\d\j\n\g\9\k\b\m\0\b\2\k\w\z\j\f\l\e\y\v\y\d\5\x\u\f\o\y\h\h\8\g\v\p\y\9\3\z\7\n\m\z\c\r\z\r\v\p\b\4\1\r\s\b\s\7\1\j\w\u\w\5\7\c\e\v\t\h\y\w\l\8\1\4\q\6\4\x\0\5\t\v\i\o\5\b\4\f\l\f\5\s\7\o\2\3\m\z\h\4\c\1\z\y\l\c\k\o\l\v\j\s\o\6\k\3\p\5\t\s\s\e\z\6\0\z\0\g\u\2\q\t\6\2\a\p\o\0\6\1\w\0\l\y\1\4\8\9\q\3\h\i\k\1\3\a\g\a\f\n\q\y\y\x\f\q\b\0\k\t\b\t\v\6\8\h\z\2\1\e\5\r\9\v\l\w\r\o\p\y\k\9\z\l\2\j\2\8\x\s\k\i\e\d\o\p\r\5\w ]] 00:08:03.280 05:48:24 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:03.539 05:48:25 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:03.539 05:48:25 -- dd/uring.sh@75 -- # gen_conf 00:08:03.539 05:48:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.539 05:48:25 -- common/autotest_common.sh@10 -- # set +x 
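The uring copy exercised above reduces to: allocate a zram device, expose it to SPDK as a uring bdev alongside a malloc bdev, push the generated magic dump into uring0, pull it back out, and diff the two dumps. A minimal stand-alone sketch of that round trip follows; the zram disksize sysfs path, the dump file names, and the spdk_dd location are assumptions, since the harness resolves them through dd/common.sh helpers.

# Sketch of the malloc0 <-> uring0 round trip (paths are assumptions).
zram_id=$(cat /sys/class/zram-control/hot_add)        # allocates e.g. /dev/zram1
echo 512M > "/sys/block/zram${zram_id}/disksize"      # assumed sysfs attribute

gen_conf() {
  cat <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "filename": "/dev/zram${zram_id}", "name": "uring0" },
    "method": "bdev_uring_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
}

# Write the magic dump into the uring bdev, read it back, then compare.
./build/bin/spdk_dd --if=magic.dump0 --ob=uring0 --json <(gen_conf)
./build/bin/spdk_dd --ib=uring0 --of=magic.dump1 --json <(gen_conf)
diff -q magic.dump0 magic.dump1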
00:08:03.797 [2024-12-15 05:48:25.210938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:03.797 [2024-12-15 05:48:25.211029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70826 ] 00:08:03.797 { 00:08:03.797 "subsystems": [ 00:08:03.797 { 00:08:03.797 "subsystem": "bdev", 00:08:03.797 "config": [ 00:08:03.797 { 00:08:03.797 "params": { 00:08:03.797 "block_size": 512, 00:08:03.797 "num_blocks": 1048576, 00:08:03.797 "name": "malloc0" 00:08:03.797 }, 00:08:03.797 "method": "bdev_malloc_create" 00:08:03.797 }, 00:08:03.797 { 00:08:03.797 "params": { 00:08:03.797 "filename": "/dev/zram1", 00:08:03.797 "name": "uring0" 00:08:03.797 }, 00:08:03.797 "method": "bdev_uring_create" 00:08:03.797 }, 00:08:03.797 { 00:08:03.797 "method": "bdev_wait_for_examine" 00:08:03.797 } 00:08:03.797 ] 00:08:03.797 } 00:08:03.797 ] 00:08:03.797 } 00:08:03.797 [2024-12-15 05:48:25.342036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.797 [2024-12-15 05:48:25.373060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.176  [2024-12-15T05:48:27.782Z] Copying: 174/512 [MB] (174 MBps) [2024-12-15T05:48:28.721Z] Copying: 346/512 [MB] (172 MBps) [2024-12-15T05:48:28.721Z] Copying: 512/512 [MB] (average 173 MBps) 00:08:07.080 00:08:07.080 05:48:28 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:07.080 05:48:28 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:07.080 05:48:28 -- dd/uring.sh@87 -- # : 00:08:07.080 05:48:28 -- dd/uring.sh@87 -- # : 00:08:07.080 05:48:28 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:07.080 05:48:28 -- dd/uring.sh@87 -- # gen_conf 00:08:07.080 05:48:28 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.080 05:48:28 -- common/autotest_common.sh@10 -- # set +x 00:08:07.080 [2024-12-15 05:48:28.708161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:07.080 [2024-12-15 05:48:28.708259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70876 ] 00:08:07.339 { 00:08:07.339 "subsystems": [ 00:08:07.339 { 00:08:07.339 "subsystem": "bdev", 00:08:07.339 "config": [ 00:08:07.339 { 00:08:07.339 "params": { 00:08:07.339 "block_size": 512, 00:08:07.339 "num_blocks": 1048576, 00:08:07.339 "name": "malloc0" 00:08:07.339 }, 00:08:07.339 "method": "bdev_malloc_create" 00:08:07.339 }, 00:08:07.339 { 00:08:07.339 "params": { 00:08:07.339 "filename": "/dev/zram1", 00:08:07.339 "name": "uring0" 00:08:07.339 }, 00:08:07.339 "method": "bdev_uring_create" 00:08:07.339 }, 00:08:07.339 { 00:08:07.339 "params": { 00:08:07.339 "name": "uring0" 00:08:07.339 }, 00:08:07.339 "method": "bdev_uring_delete" 00:08:07.339 }, 00:08:07.339 { 00:08:07.339 "method": "bdev_wait_for_examine" 00:08:07.339 } 00:08:07.339 ] 00:08:07.339 } 00:08:07.339 ] 00:08:07.339 } 00:08:07.339 [2024-12-15 05:48:28.839209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.339 [2024-12-15 05:48:28.873450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.598  [2024-12-15T05:48:29.498Z] Copying: 0/0 [B] (average 0 Bps) 00:08:07.857 00:08:07.857 05:48:29 -- dd/uring.sh@94 -- # : 00:08:07.857 05:48:29 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:07.857 05:48:29 -- dd/uring.sh@94 -- # gen_conf 00:08:07.857 05:48:29 -- common/autotest_common.sh@650 -- # local es=0 00:08:07.857 05:48:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.857 05:48:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:07.857 05:48:29 -- common/autotest_common.sh@10 -- # set +x 00:08:07.857 05:48:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.857 05:48:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.857 05:48:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.857 05:48:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.857 05:48:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.857 05:48:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.857 05:48:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.857 05:48:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.857 05:48:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:07.857 [2024-12-15 05:48:29.324504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:07.857 [2024-12-15 05:48:29.324593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70901 ] 00:08:07.857 { 00:08:07.857 "subsystems": [ 00:08:07.857 { 00:08:07.857 "subsystem": "bdev", 00:08:07.857 "config": [ 00:08:07.857 { 00:08:07.857 "params": { 00:08:07.857 "block_size": 512, 00:08:07.857 "num_blocks": 1048576, 00:08:07.857 "name": "malloc0" 00:08:07.857 }, 00:08:07.857 "method": "bdev_malloc_create" 00:08:07.857 }, 00:08:07.857 { 00:08:07.857 "params": { 00:08:07.857 "filename": "/dev/zram1", 00:08:07.857 "name": "uring0" 00:08:07.857 }, 00:08:07.857 "method": "bdev_uring_create" 00:08:07.857 }, 00:08:07.857 { 00:08:07.857 "params": { 00:08:07.857 "name": "uring0" 00:08:07.857 }, 00:08:07.857 "method": "bdev_uring_delete" 00:08:07.857 }, 00:08:07.857 { 00:08:07.857 "method": "bdev_wait_for_examine" 00:08:07.857 } 00:08:07.857 ] 00:08:07.857 } 00:08:07.857 ] 00:08:07.857 } 00:08:07.857 [2024-12-15 05:48:29.457763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.117 [2024-12-15 05:48:29.498423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.117 [2024-12-15 05:48:29.661114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:08.117 [2024-12-15 05:48:29.661213] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:08.117 [2024-12-15 05:48:29.661229] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:08.117 [2024-12-15 05:48:29.661242] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.376 [2024-12-15 05:48:29.850507] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:08.376 05:48:29 -- common/autotest_common.sh@653 -- # es=237 00:08:08.376 05:48:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.376 05:48:29 -- common/autotest_common.sh@662 -- # es=109 00:08:08.376 05:48:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:08.376 05:48:29 -- common/autotest_common.sh@670 -- # es=1 00:08:08.376 05:48:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.376 05:48:29 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:08.376 05:48:29 -- dd/common.sh@172 -- # local id=1 00:08:08.376 05:48:29 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:08.376 05:48:29 -- dd/common.sh@176 -- # echo 1 00:08:08.376 05:48:29 -- dd/common.sh@177 -- # echo 1 00:08:08.376 05:48:29 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:08.635 00:08:08.635 real 0m13.057s 00:08:08.635 user 0m7.404s 00:08:08.635 sys 0m5.030s 00:08:08.635 05:48:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.635 05:48:30 -- common/autotest_common.sh@10 -- # set +x 00:08:08.635 ************************************ 00:08:08.635 END TEST dd_uring_copy 00:08:08.635 ************************************ 00:08:08.635 00:08:08.635 real 0m13.273s 00:08:08.635 user 0m7.534s 00:08:08.635 sys 0m5.120s 00:08:08.635 05:48:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.635 05:48:30 -- common/autotest_common.sh@10 -- # set +x 00:08:08.635 ************************************ 00:08:08.635 END TEST spdk_dd_uring 00:08:08.635 ************************************ 00:08:08.635 05:48:30 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:08.635 05:48:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.635 05:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.635 05:48:30 -- common/autotest_common.sh@10 -- # set +x 00:08:08.635 ************************************ 00:08:08.635 START TEST spdk_dd_sparse 00:08:08.635 ************************************ 00:08:08.635 05:48:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:08.894 * Looking for test storage... 00:08:08.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:08.894 05:48:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.894 05:48:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.894 05:48:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.894 05:48:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.894 05:48:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.894 05:48:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.894 05:48:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.894 05:48:30 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.894 05:48:30 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.894 05:48:30 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.894 05:48:30 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.894 05:48:30 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.894 05:48:30 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.894 05:48:30 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.894 05:48:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.894 05:48:30 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.894 05:48:30 -- scripts/common.sh@344 -- # : 1 00:08:08.894 05:48:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.894 05:48:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.894 05:48:30 -- scripts/common.sh@364 -- # decimal 1 00:08:08.894 05:48:30 -- scripts/common.sh@352 -- # local d=1 00:08:08.894 05:48:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.894 05:48:30 -- scripts/common.sh@354 -- # echo 1 00:08:08.894 05:48:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.894 05:48:30 -- scripts/common.sh@365 -- # decimal 2 00:08:08.894 05:48:30 -- scripts/common.sh@352 -- # local d=2 00:08:08.894 05:48:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.894 05:48:30 -- scripts/common.sh@354 -- # echo 2 00:08:08.894 05:48:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.895 05:48:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.895 05:48:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.895 05:48:30 -- scripts/common.sh@367 -- # return 0 00:08:08.895 05:48:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.895 05:48:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.895 --rc genhtml_branch_coverage=1 00:08:08.895 --rc genhtml_function_coverage=1 00:08:08.895 --rc genhtml_legend=1 00:08:08.895 --rc geninfo_all_blocks=1 00:08:08.895 --rc geninfo_unexecuted_blocks=1 00:08:08.895 00:08:08.895 ' 00:08:08.895 05:48:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.895 --rc genhtml_branch_coverage=1 00:08:08.895 --rc genhtml_function_coverage=1 00:08:08.895 --rc genhtml_legend=1 00:08:08.895 --rc geninfo_all_blocks=1 00:08:08.895 --rc geninfo_unexecuted_blocks=1 00:08:08.895 00:08:08.895 ' 00:08:08.895 05:48:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.895 --rc genhtml_branch_coverage=1 00:08:08.895 --rc genhtml_function_coverage=1 00:08:08.895 --rc genhtml_legend=1 00:08:08.895 --rc geninfo_all_blocks=1 00:08:08.895 --rc geninfo_unexecuted_blocks=1 00:08:08.895 00:08:08.895 ' 00:08:08.895 05:48:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.895 --rc genhtml_branch_coverage=1 00:08:08.895 --rc genhtml_function_coverage=1 00:08:08.895 --rc genhtml_legend=1 00:08:08.895 --rc geninfo_all_blocks=1 00:08:08.895 --rc geninfo_unexecuted_blocks=1 00:08:08.895 00:08:08.895 ' 00:08:08.895 05:48:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.895 05:48:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.895 05:48:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.895 05:48:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.895 05:48:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.895 05:48:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.895 05:48:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.895 05:48:30 -- paths/export.sh@5 -- # export PATH 00:08:08.895 05:48:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.895 05:48:30 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:08.895 05:48:30 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:08.895 05:48:30 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:08.895 05:48:30 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:08.895 05:48:30 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:08.895 05:48:30 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:08.895 05:48:30 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:08.895 05:48:30 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:08.895 05:48:30 -- dd/sparse.sh@118 -- # prepare 00:08:08.895 05:48:30 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:08.895 05:48:30 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:08.895 1+0 records in 00:08:08.895 1+0 records out 00:08:08.895 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00413046 s, 1.0 GB/s 00:08:08.895 05:48:30 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:08.895 1+0 records in 00:08:08.895 1+0 records out 00:08:08.895 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00622364 s, 674 MB/s 00:08:08.895 05:48:30 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:08.895 1+0 records in 00:08:08.895 1+0 records out 00:08:08.895 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00662826 s, 633 MB/s 00:08:08.895 05:48:30 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:08.895 05:48:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.895 05:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.895 05:48:30 -- common/autotest_common.sh@10 -- # set +x 00:08:08.895 ************************************ 00:08:08.895 START TEST dd_sparse_file_to_file 00:08:08.895 
************************************ 00:08:08.895 05:48:30 -- common/autotest_common.sh@1114 -- # file_to_file 00:08:08.895 05:48:30 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:08.895 05:48:30 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:08.895 05:48:30 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:08.895 05:48:30 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:08.895 05:48:30 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:08.895 05:48:30 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:08.895 05:48:30 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:08.895 05:48:30 -- dd/sparse.sh@41 -- # gen_conf 00:08:08.895 05:48:30 -- dd/common.sh@31 -- # xtrace_disable 00:08:08.895 05:48:30 -- common/autotest_common.sh@10 -- # set +x 00:08:09.154 [2024-12-15 05:48:30.544331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:09.154 [2024-12-15 05:48:30.544413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70999 ] 00:08:09.154 { 00:08:09.154 "subsystems": [ 00:08:09.154 { 00:08:09.154 "subsystem": "bdev", 00:08:09.154 "config": [ 00:08:09.154 { 00:08:09.154 "params": { 00:08:09.154 "block_size": 4096, 00:08:09.154 "filename": "dd_sparse_aio_disk", 00:08:09.154 "name": "dd_aio" 00:08:09.154 }, 00:08:09.154 "method": "bdev_aio_create" 00:08:09.154 }, 00:08:09.154 { 00:08:09.154 "params": { 00:08:09.154 "lvs_name": "dd_lvstore", 00:08:09.154 "bdev_name": "dd_aio" 00:08:09.154 }, 00:08:09.154 "method": "bdev_lvol_create_lvstore" 00:08:09.154 }, 00:08:09.154 { 00:08:09.154 "method": "bdev_wait_for_examine" 00:08:09.154 } 00:08:09.154 ] 00:08:09.154 } 00:08:09.154 ] 00:08:09.154 } 00:08:09.154 [2024-12-15 05:48:30.680258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.154 [2024-12-15 05:48:30.710939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.414  [2024-12-15T05:48:31.055Z] Copying: 12/36 [MB] (average 1500 MBps) 00:08:09.414 00:08:09.414 05:48:30 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:09.414 05:48:30 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:09.414 05:48:30 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:09.414 05:48:30 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:09.414 05:48:30 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:09.414 05:48:30 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:09.414 05:48:30 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:09.414 05:48:30 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:09.414 05:48:30 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:09.414 05:48:30 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:09.414 00:08:09.414 real 0m0.495s 00:08:09.414 user 0m0.271s 00:08:09.414 sys 0m0.132s 00:08:09.414 05:48:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.414 05:48:30 -- common/autotest_common.sh@10 -- # set +x 00:08:09.414 ************************************ 00:08:09.414 END TEST dd_sparse_file_to_file 00:08:09.414 ************************************ 00:08:09.414 05:48:31 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:08:09.414 05:48:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:09.414 05:48:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.414 05:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:09.414 ************************************ 00:08:09.414 START TEST dd_sparse_file_to_bdev 00:08:09.414 ************************************ 00:08:09.414 05:48:31 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:08:09.414 05:48:31 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:09.414 05:48:31 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:09.414 05:48:31 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:09.414 05:48:31 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:09.414 05:48:31 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:09.414 05:48:31 -- dd/sparse.sh@73 -- # gen_conf 00:08:09.414 05:48:31 -- dd/common.sh@31 -- # xtrace_disable 00:08:09.414 05:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:09.674 [2024-12-15 05:48:31.086366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:09.674 [2024-12-15 05:48:31.086456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71039 ] 00:08:09.674 { 00:08:09.674 "subsystems": [ 00:08:09.674 { 00:08:09.674 "subsystem": "bdev", 00:08:09.674 "config": [ 00:08:09.674 { 00:08:09.674 "params": { 00:08:09.674 "block_size": 4096, 00:08:09.674 "filename": "dd_sparse_aio_disk", 00:08:09.674 "name": "dd_aio" 00:08:09.674 }, 00:08:09.674 "method": "bdev_aio_create" 00:08:09.674 }, 00:08:09.674 { 00:08:09.674 "params": { 00:08:09.674 "lvs_name": "dd_lvstore", 00:08:09.674 "lvol_name": "dd_lvol", 00:08:09.674 "size": 37748736, 00:08:09.674 "thin_provision": true 00:08:09.674 }, 00:08:09.674 "method": "bdev_lvol_create" 00:08:09.674 }, 00:08:09.674 { 00:08:09.674 "method": "bdev_wait_for_examine" 00:08:09.674 } 00:08:09.674 ] 00:08:09.674 } 00:08:09.674 ] 00:08:09.674 } 00:08:09.674 [2024-12-15 05:48:31.214854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.674 [2024-12-15 05:48:31.247390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.674 [2024-12-15 05:48:31.308939] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:09.933  [2024-12-15T05:48:31.574Z] Copying: 12/36 [MB] (average 521 MBps)[2024-12-15 05:48:31.348842] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:09.933 00:08:09.934 00:08:09.934 00:08:09.934 real 0m0.479s 00:08:09.934 user 0m0.287s 00:08:09.934 sys 0m0.121s 00:08:09.934 05:48:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.934 05:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:09.934 ************************************ 00:08:09.934 END TEST dd_sparse_file_to_bdev 00:08:09.934 ************************************ 
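The sparse tests above hinge on how the prepare step lays out file_zero1: a 36 MiB apparent size (37748736 bytes) with only three 4 MiB extents actually allocated, at offsets 0, 16 MiB, and 32 MiB, so both the byte size and the 24576-block allocation must survive a --sparse copy. A rough sketch of the prepare step and the file-to-file leg, assuming the same file names and that $BDEV_JSON holds an aio-bdev/lvstore config like the one shown earlier:

# Prepare: a 100 MB AIO backing file plus a sparse source with three 4 MiB extents.
truncate dd_sparse_aio_disk --size 104857600
dd if=/dev/zero of=file_zero1 bs=4M count=1            # extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4     # extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8     # extent at 32 MiB -> 37748736 bytes apparent

# file_to_file: copy through spdk_dd with --sparse so the holes stay holes.
./build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse \
    --json <(echo "$BDEV_JSON")                        # $BDEV_JSON is an assumed placeholder

[[ $(stat --printf=%s file_zero1) == $(stat --printf=%s file_zero2) ]]   # apparent size: 37748736
[[ $(stat --printf=%b file_zero1) == $(stat --printf=%b file_zero2) ]]   # allocated blocks: 24576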
00:08:09.934 05:48:31 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:09.934 05:48:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:09.934 05:48:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.934 05:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:10.192 ************************************ 00:08:10.192 START TEST dd_sparse_bdev_to_file 00:08:10.192 ************************************ 00:08:10.192 05:48:31 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:08:10.192 05:48:31 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:10.192 05:48:31 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:10.192 05:48:31 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:10.192 05:48:31 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:10.193 05:48:31 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:10.193 05:48:31 -- dd/sparse.sh@91 -- # gen_conf 00:08:10.193 05:48:31 -- dd/common.sh@31 -- # xtrace_disable 00:08:10.193 05:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:10.193 [2024-12-15 05:48:31.624261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:10.193 [2024-12-15 05:48:31.624373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71071 ] 00:08:10.193 { 00:08:10.193 "subsystems": [ 00:08:10.193 { 00:08:10.193 "subsystem": "bdev", 00:08:10.193 "config": [ 00:08:10.193 { 00:08:10.193 "params": { 00:08:10.193 "block_size": 4096, 00:08:10.193 "filename": "dd_sparse_aio_disk", 00:08:10.193 "name": "dd_aio" 00:08:10.193 }, 00:08:10.193 "method": "bdev_aio_create" 00:08:10.193 }, 00:08:10.193 { 00:08:10.193 "method": "bdev_wait_for_examine" 00:08:10.193 } 00:08:10.193 ] 00:08:10.193 } 00:08:10.193 ] 00:08:10.193 } 00:08:10.193 [2024-12-15 05:48:31.760326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.193 [2024-12-15 05:48:31.790677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.452  [2024-12-15T05:48:32.093Z] Copying: 12/36 [MB] (average 1500 MBps) 00:08:10.452 00:08:10.452 05:48:32 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:10.452 05:48:32 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:10.452 05:48:32 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:10.452 05:48:32 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:10.452 05:48:32 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:10.452 05:48:32 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:10.452 05:48:32 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:10.452 05:48:32 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:10.452 05:48:32 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:10.452 05:48:32 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:10.452 00:08:10.452 real 0m0.490s 00:08:10.452 user 0m0.296s 00:08:10.452 sys 0m0.118s 00:08:10.452 05:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.452 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.452 ************************************ 00:08:10.452 END TEST dd_sparse_bdev_to_file 00:08:10.452 ************************************ 00:08:10.711 05:48:32 -- 
dd/sparse.sh@1 -- # cleanup 00:08:10.711 05:48:32 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:10.711 05:48:32 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:10.711 05:48:32 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:10.711 05:48:32 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:10.711 00:08:10.711 real 0m1.855s 00:08:10.711 user 0m1.028s 00:08:10.711 sys 0m0.581s 00:08:10.711 05:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.711 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.711 ************************************ 00:08:10.711 END TEST spdk_dd_sparse 00:08:10.711 ************************************ 00:08:10.711 05:48:32 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:10.711 05:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.711 05:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.711 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.711 ************************************ 00:08:10.711 START TEST spdk_dd_negative 00:08:10.711 ************************************ 00:08:10.711 05:48:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:10.711 * Looking for test storage... 00:08:10.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:10.711 05:48:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:10.711 05:48:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:10.711 05:48:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:10.711 05:48:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:10.711 05:48:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:10.711 05:48:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:10.711 05:48:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:10.711 05:48:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:10.711 05:48:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:10.711 05:48:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.711 05:48:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:10.711 05:48:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:10.711 05:48:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:10.711 05:48:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:10.711 05:48:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:10.711 05:48:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:10.711 05:48:32 -- scripts/common.sh@344 -- # : 1 00:08:10.711 05:48:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:10.711 05:48:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.711 05:48:32 -- scripts/common.sh@364 -- # decimal 1 00:08:10.711 05:48:32 -- scripts/common.sh@352 -- # local d=1 00:08:10.711 05:48:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.711 05:48:32 -- scripts/common.sh@354 -- # echo 1 00:08:10.711 05:48:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:10.711 05:48:32 -- scripts/common.sh@365 -- # decimal 2 00:08:10.971 05:48:32 -- scripts/common.sh@352 -- # local d=2 00:08:10.971 05:48:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.971 05:48:32 -- scripts/common.sh@354 -- # echo 2 00:08:10.971 05:48:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:10.971 05:48:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:10.971 05:48:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:10.971 05:48:32 -- scripts/common.sh@367 -- # return 0 00:08:10.971 05:48:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.971 05:48:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:10.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.971 --rc genhtml_branch_coverage=1 00:08:10.971 --rc genhtml_function_coverage=1 00:08:10.971 --rc genhtml_legend=1 00:08:10.971 --rc geninfo_all_blocks=1 00:08:10.971 --rc geninfo_unexecuted_blocks=1 00:08:10.971 00:08:10.971 ' 00:08:10.971 05:48:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:10.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.971 --rc genhtml_branch_coverage=1 00:08:10.971 --rc genhtml_function_coverage=1 00:08:10.971 --rc genhtml_legend=1 00:08:10.971 --rc geninfo_all_blocks=1 00:08:10.971 --rc geninfo_unexecuted_blocks=1 00:08:10.971 00:08:10.971 ' 00:08:10.971 05:48:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:10.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.971 --rc genhtml_branch_coverage=1 00:08:10.971 --rc genhtml_function_coverage=1 00:08:10.971 --rc genhtml_legend=1 00:08:10.971 --rc geninfo_all_blocks=1 00:08:10.971 --rc geninfo_unexecuted_blocks=1 00:08:10.971 00:08:10.971 ' 00:08:10.971 05:48:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:10.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.971 --rc genhtml_branch_coverage=1 00:08:10.971 --rc genhtml_function_coverage=1 00:08:10.971 --rc genhtml_legend=1 00:08:10.971 --rc geninfo_all_blocks=1 00:08:10.971 --rc geninfo_unexecuted_blocks=1 00:08:10.971 00:08:10.971 ' 00:08:10.971 05:48:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.971 05:48:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.971 05:48:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.971 05:48:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.971 05:48:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.971 05:48:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.971 05:48:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.971 05:48:32 -- paths/export.sh@5 -- # export PATH 00:08:10.971 05:48:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.971 05:48:32 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.971 05:48:32 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.971 05:48:32 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.971 05:48:32 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.971 05:48:32 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:10.971 05:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.971 05:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.971 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.971 ************************************ 00:08:10.971 START TEST dd_invalid_arguments 00:08:10.971 ************************************ 00:08:10.971 05:48:32 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:08:10.971 05:48:32 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:10.971 05:48:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:10.971 05:48:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:10.971 05:48:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.971 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.971 05:48:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.971 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.971 05:48:32 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.971 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.971 05:48:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.971 05:48:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.971 05:48:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:10.971 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:10.971 options: 00:08:10.971 -c, --config JSON config file (default none) 00:08:10.971 --json JSON config file (default none) 00:08:10.971 --json-ignore-init-errors 00:08:10.971 don't exit on invalid config entry 00:08:10.971 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:10.971 -g, --single-file-segments 00:08:10.971 force creating just one hugetlbfs file 00:08:10.971 -h, --help show this usage 00:08:10.971 -i, --shm-id shared memory ID (optional) 00:08:10.971 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:10.971 --lcores lcore to CPU mapping list. The list is in the format: 00:08:10.971 [<,lcores[@CPUs]>...] 00:08:10.971 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:10.971 Within the group, '-' is used for range separator, 00:08:10.971 ',' is used for single number separator. 00:08:10.971 '( )' can be omitted for single element group, 00:08:10.971 '@' can be omitted if cpus and lcores have the same value 00:08:10.971 -n, --mem-channels channel number of memory channels used for DPDK 00:08:10.971 -p, --main-core main (primary) core for DPDK 00:08:10.971 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:10.971 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:10.972 --disable-cpumask-locks Disable CPU core lock files. 00:08:10.972 --silence-noticelog disable notice level logging to stderr 00:08:10.972 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:10.972 -u, --no-pci disable PCI access 00:08:10.972 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:10.972 --max-delay maximum reactor delay (in microseconds) 00:08:10.972 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:10.972 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:10.972 -R, --huge-unlink unlink huge files after initialization 00:08:10.972 -v, --version print SPDK version 00:08:10.972 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:10.972 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:10.972 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:10.972 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:10.972 Tracepoints vary in size and can use more than one trace entry. 
00:08:10.972 --rpcs-allowed comma-separated list of permitted RPCS 00:08:10.972 --env-context Opaque context for use of the env implementation 00:08:10.972 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:10.972 --no-huge run without using hugepages 00:08:10.972 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:10.972 -e, --tpoint-group [:] 00:08:10.972 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:08:10.972 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:10.972 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:10.972 [2024-12-15 05:48:32.421489] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:08:10.972 can be combined (e.g. thread,bdev:0x1). 00:08:10.972 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:10.972 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:10.972 [--------- DD Options ---------] 00:08:10.972 --if Input file. Must specify either --if or --ib. 00:08:10.972 --ib Input bdev. Must specifier either --if or --ib 00:08:10.972 --of Output file. Must specify either --of or --ob. 00:08:10.972 --ob Output bdev. Must specify either --of or --ob. 00:08:10.972 --iflag Input file flags. 00:08:10.972 --oflag Output file flags. 00:08:10.972 --bs I/O unit size (default: 4096) 00:08:10.972 --qd Queue depth (default: 2) 00:08:10.972 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:10.972 --skip Skip this many I/O units at start of input. (default: 0) 00:08:10.972 --seek Skip this many I/O units at start of output. (default: 0) 00:08:10.972 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:08:10.972 --sparse Enable hole skipping in input target 00:08:10.972 Available iflag and oflag values: 00:08:10.972 append - append mode 00:08:10.972 direct - use direct I/O for data 00:08:10.972 directory - fail unless a directory 00:08:10.972 dsync - use synchronized I/O for data 00:08:10.972 noatime - do not update access time 00:08:10.972 noctty - do not assign controlling terminal from file 00:08:10.972 nofollow - do not follow symlinks 00:08:10.972 nonblock - use non-blocking I/O 00:08:10.972 sync - use synchronized I/O for data and metadata 00:08:10.972 05:48:32 -- common/autotest_common.sh@653 -- # es=2 00:08:10.972 05:48:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.972 05:48:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.972 05:48:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.972 00:08:10.972 real 0m0.065s 00:08:10.972 user 0m0.038s 00:08:10.972 sys 0m0.025s 00:08:10.972 05:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.972 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.972 ************************************ 00:08:10.972 END TEST dd_invalid_arguments 00:08:10.972 ************************************ 00:08:10.972 05:48:32 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:10.972 05:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.972 05:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.972 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.972 ************************************ 00:08:10.972 START TEST dd_double_input 00:08:10.972 ************************************ 00:08:10.972 05:48:32 -- common/autotest_common.sh@1114 -- # double_input 00:08:10.972 05:48:32 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:10.972 05:48:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:10.972 05:48:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:10.972 05:48:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.972 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.972 05:48:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.972 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.972 05:48:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.972 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.972 05:48:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.972 05:48:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.972 05:48:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:10.972 [2024-12-15 05:48:32.531160] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:08:10.972 05:48:32 -- common/autotest_common.sh@653 -- # es=22 00:08:10.972 05:48:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.972 05:48:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.972 05:48:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.972 00:08:10.972 real 0m0.063s 00:08:10.972 user 0m0.045s 00:08:10.972 sys 0m0.017s 00:08:10.972 05:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.972 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.972 ************************************ 00:08:10.972 END TEST dd_double_input 00:08:10.972 ************************************ 00:08:10.972 05:48:32 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:10.972 05:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.972 05:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.972 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.972 ************************************ 00:08:10.972 START TEST dd_double_output 00:08:10.972 ************************************ 00:08:10.972 05:48:32 -- common/autotest_common.sh@1114 -- # double_output 00:08:10.972 05:48:32 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:10.972 05:48:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:10.972 05:48:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:10.972 05:48:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.972 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.972 05:48:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.972 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.972 05:48:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.972 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.972 05:48:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.972 05:48:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.972 05:48:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:11.232 [2024-12-15 05:48:32.645091] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
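A minimal sketch of the pattern the negative tests above and below follow, assuming a stand-in for the NOT helper that appears in the traces (the real helper lives in autotest_common.sh and its internals are not shown here); the flag combinations mirror the errors logged in this run, while the --if/--of file names and bdev names are placeholders:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    not_cmd() {   # stand-in for NOT: succeed only when the wrapped command fails
      if "$@"; then return 1; else return 0; fi
    }

    not_cmd "$SPDK_DD" --ii= --ob=                              # unrecognized option (invalid arguments)
    not_cmd "$SPDK_DD" --if=dd.dump0 --ib=dd_aio --ob=dd_aio    # --if and --ib are mutually exclusive
    not_cmd "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --ob=dd_aio  # --of and --ob are mutually exclusive
    not_cmd "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --bs=0       # invalid --bs value

Each negative test hands spdk_dd exactly one invalid combination and treats a non-zero exit status as a pass, which is why the *ERROR* lines above are expected output rather than failures.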
00:08:11.232 05:48:32 -- common/autotest_common.sh@653 -- # es=22 00:08:11.232 05:48:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.232 05:48:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.232 05:48:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.232 00:08:11.232 real 0m0.061s 00:08:11.232 user 0m0.041s 00:08:11.232 sys 0m0.019s 00:08:11.232 05:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.232 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.232 ************************************ 00:08:11.232 END TEST dd_double_output 00:08:11.232 ************************************ 00:08:11.232 05:48:32 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:11.232 05:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.232 05:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.232 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.232 ************************************ 00:08:11.232 START TEST dd_no_input 00:08:11.232 ************************************ 00:08:11.232 05:48:32 -- common/autotest_common.sh@1114 -- # no_input 00:08:11.232 05:48:32 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:11.232 05:48:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:11.232 05:48:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:11.232 05:48:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.232 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.232 05:48:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.232 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.232 05:48:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.232 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.232 05:48:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.232 05:48:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:11.232 05:48:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:11.232 [2024-12-15 05:48:32.759362] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:08:11.232 05:48:32 -- common/autotest_common.sh@653 -- # es=22 00:08:11.232 05:48:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.232 05:48:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.232 05:48:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.232 00:08:11.232 real 0m0.064s 00:08:11.232 user 0m0.039s 00:08:11.232 sys 0m0.024s 00:08:11.232 05:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.232 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.232 ************************************ 00:08:11.232 END TEST dd_no_input 00:08:11.232 ************************************ 00:08:11.232 05:48:32 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:11.232 05:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.232 05:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.232 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.232 ************************************ 
00:08:11.232 START TEST dd_no_output 00:08:11.232 ************************************ 00:08:11.232 05:48:32 -- common/autotest_common.sh@1114 -- # no_output 00:08:11.232 05:48:32 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.232 05:48:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:11.232 05:48:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.232 05:48:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.232 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.232 05:48:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.232 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.232 05:48:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.232 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.232 05:48:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.232 05:48:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:11.232 05:48:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.493 [2024-12-15 05:48:32.875498] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:08:11.494 05:48:32 -- common/autotest_common.sh@653 -- # es=22 00:08:11.494 05:48:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.494 05:48:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.494 05:48:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.494 00:08:11.494 real 0m0.065s 00:08:11.494 user 0m0.042s 00:08:11.494 sys 0m0.022s 00:08:11.494 05:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.494 ************************************ 00:08:11.494 END TEST dd_no_output 00:08:11.494 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.494 ************************************ 00:08:11.494 05:48:32 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:11.494 05:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.494 05:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.494 05:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.494 ************************************ 00:08:11.494 START TEST dd_wrong_blocksize 00:08:11.494 ************************************ 00:08:11.494 05:48:32 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:08:11.494 05:48:32 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:11.494 05:48:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:11.494 05:48:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:11.494 05:48:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 05:48:32 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:08:11.494 05:48:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.494 05:48:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 05:48:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.494 05:48:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 05:48:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:11.494 05:48:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:11.494 [2024-12-15 05:48:32.991293] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:08:11.494 05:48:33 -- common/autotest_common.sh@653 -- # es=22 00:08:11.494 05:48:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.494 05:48:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.494 05:48:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.494 00:08:11.494 real 0m0.065s 00:08:11.494 user 0m0.037s 00:08:11.494 sys 0m0.026s 00:08:11.494 05:48:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.494 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:11.494 ************************************ 00:08:11.494 END TEST dd_wrong_blocksize 00:08:11.494 ************************************ 00:08:11.494 05:48:33 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:11.494 05:48:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.494 05:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.494 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:11.494 ************************************ 00:08:11.494 START TEST dd_smaller_blocksize 00:08:11.494 ************************************ 00:08:11.494 05:48:33 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:08:11.494 05:48:33 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:11.494 05:48:33 -- common/autotest_common.sh@650 -- # local es=0 00:08:11.494 05:48:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:11.494 05:48:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.494 05:48:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.494 05:48:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.494 05:48:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 05:48:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:08:11.494 05:48:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:11.494 [2024-12-15 05:48:33.108162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:11.494 [2024-12-15 05:48:33.108263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71294 ] 00:08:11.753 [2024-12-15 05:48:33.247006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.753 [2024-12-15 05:48:33.287005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.753 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:11.753 [2024-12-15 05:48:33.338537] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:11.753 [2024-12-15 05:48:33.338571] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.013 [2024-12-15 05:48:33.403866] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:12.013 05:48:33 -- common/autotest_common.sh@653 -- # es=244 00:08:12.013 05:48:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.013 05:48:33 -- common/autotest_common.sh@662 -- # es=116 00:08:12.013 05:48:33 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:12.013 05:48:33 -- common/autotest_common.sh@670 -- # es=1 00:08:12.013 05:48:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.013 00:08:12.013 real 0m0.416s 00:08:12.013 user 0m0.208s 00:08:12.013 sys 0m0.104s 00:08:12.013 05:48:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.013 ************************************ 00:08:12.013 END TEST dd_smaller_blocksize 00:08:12.013 ************************************ 00:08:12.013 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.013 05:48:33 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:12.013 05:48:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.013 05:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.013 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.013 ************************************ 00:08:12.013 START TEST dd_invalid_count 00:08:12.013 ************************************ 00:08:12.013 05:48:33 -- common/autotest_common.sh@1114 -- # invalid_count 00:08:12.013 05:48:33 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:12.013 05:48:33 -- common/autotest_common.sh@650 -- # local es=0 00:08:12.013 05:48:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:12.013 05:48:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.013 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.013 05:48:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.013 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.013 05:48:33 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.013 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.013 05:48:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.013 05:48:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.013 05:48:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:12.013 [2024-12-15 05:48:33.580240] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:08:12.013 05:48:33 -- common/autotest_common.sh@653 -- # es=22 00:08:12.013 05:48:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.013 05:48:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.013 05:48:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.013 00:08:12.013 real 0m0.066s 00:08:12.013 user 0m0.042s 00:08:12.013 sys 0m0.024s 00:08:12.013 05:48:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.013 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.013 ************************************ 00:08:12.013 END TEST dd_invalid_count 00:08:12.013 ************************************ 00:08:12.013 05:48:33 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:12.013 05:48:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.013 05:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.013 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.013 ************************************ 00:08:12.013 START TEST dd_invalid_oflag 00:08:12.013 ************************************ 00:08:12.013 05:48:33 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:08:12.013 05:48:33 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:12.013 05:48:33 -- common/autotest_common.sh@650 -- # local es=0 00:08:12.273 05:48:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:12.273 05:48:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.273 05:48:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:12.273 [2024-12-15 05:48:33.701417] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:08:12.273 05:48:33 -- common/autotest_common.sh@653 -- # es=22 00:08:12.273 05:48:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.273 05:48:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.273 
05:48:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.273 00:08:12.273 real 0m0.067s 00:08:12.273 user 0m0.042s 00:08:12.273 sys 0m0.024s 00:08:12.273 05:48:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.273 ************************************ 00:08:12.273 END TEST dd_invalid_oflag 00:08:12.273 ************************************ 00:08:12.273 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.273 05:48:33 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:12.273 05:48:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.273 05:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.273 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.273 ************************************ 00:08:12.273 START TEST dd_invalid_iflag 00:08:12.273 ************************************ 00:08:12.273 05:48:33 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:08:12.273 05:48:33 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:12.273 05:48:33 -- common/autotest_common.sh@650 -- # local es=0 00:08:12.273 05:48:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:12.273 05:48:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.273 05:48:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:12.273 [2024-12-15 05:48:33.817672] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:08:12.273 05:48:33 -- common/autotest_common.sh@653 -- # es=22 00:08:12.273 05:48:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.273 05:48:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.273 05:48:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.273 00:08:12.273 real 0m0.067s 00:08:12.273 user 0m0.042s 00:08:12.273 sys 0m0.024s 00:08:12.273 05:48:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.273 ************************************ 00:08:12.273 END TEST dd_invalid_iflag 00:08:12.273 ************************************ 00:08:12.273 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.273 05:48:33 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:12.273 05:48:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.273 05:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.273 05:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.273 ************************************ 00:08:12.273 START TEST dd_unknown_flag 00:08:12.273 ************************************ 00:08:12.273 05:48:33 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:08:12.273 05:48:33 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:12.273 05:48:33 -- common/autotest_common.sh@650 -- # local es=0 00:08:12.273 05:48:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:12.273 05:48:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.273 05:48:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.273 05:48:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:12.533 [2024-12-15 05:48:33.934577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:12.533 [2024-12-15 05:48:33.934828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71386 ] 00:08:12.533 [2024-12-15 05:48:34.074184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.533 [2024-12-15 05:48:34.113707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.533 [2024-12-15 05:48:34.164178] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:08:12.533 [2024-12-15 05:48:34.164254] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:12.533 [2024-12-15 05:48:34.164270] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:12.533 [2024-12-15 05:48:34.164284] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.792 [2024-12-15 05:48:34.231232] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:12.792 05:48:34 -- common/autotest_common.sh@653 -- # es=236 00:08:12.792 05:48:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.792 05:48:34 -- common/autotest_common.sh@662 -- # es=108 00:08:12.792 05:48:34 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:12.792 05:48:34 -- common/autotest_common.sh@670 -- # es=1 00:08:12.792 05:48:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.792 00:08:12.792 real 0m0.435s 00:08:12.792 user 0m0.224s 00:08:12.792 sys 0m0.106s 00:08:12.792 05:48:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.792 05:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:12.792 ************************************ 00:08:12.792 END 
TEST dd_unknown_flag 00:08:12.792 ************************************ 00:08:12.792 05:48:34 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:12.792 05:48:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.793 05:48:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.793 05:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:12.793 ************************************ 00:08:12.793 START TEST dd_invalid_json 00:08:12.793 ************************************ 00:08:12.793 05:48:34 -- common/autotest_common.sh@1114 -- # invalid_json 00:08:12.793 05:48:34 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:12.793 05:48:34 -- common/autotest_common.sh@650 -- # local es=0 00:08:12.793 05:48:34 -- dd/negative_dd.sh@95 -- # : 00:08:12.793 05:48:34 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:12.793 05:48:34 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.793 05:48:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.793 05:48:34 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.793 05:48:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.793 05:48:34 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.793 05:48:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.793 05:48:34 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.793 05:48:34 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.793 05:48:34 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:12.793 [2024-12-15 05:48:34.424230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:12.793 [2024-12-15 05:48:34.424336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71414 ] 00:08:13.052 [2024-12-15 05:48:34.563162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.052 [2024-12-15 05:48:34.602833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.052 [2024-12-15 05:48:34.602985] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:08:13.052 [2024-12-15 05:48:34.603009] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.052 [2024-12-15 05:48:34.603055] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:13.052 05:48:34 -- common/autotest_common.sh@653 -- # es=234 00:08:13.052 05:48:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.052 05:48:34 -- common/autotest_common.sh@662 -- # es=106 00:08:13.052 05:48:34 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:13.052 05:48:34 -- common/autotest_common.sh@670 -- # es=1 00:08:13.052 05:48:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.052 00:08:13.052 real 0m0.296s 00:08:13.052 user 0m0.134s 00:08:13.052 sys 0m0.061s 00:08:13.052 05:48:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.052 ************************************ 00:08:13.052 END TEST dd_invalid_json 00:08:13.052 ************************************ 00:08:13.052 05:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.311 00:08:13.311 real 0m2.532s 00:08:13.311 user 0m1.243s 00:08:13.311 sys 0m0.922s 00:08:13.311 05:48:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.311 ************************************ 00:08:13.311 END TEST spdk_dd_negative 00:08:13.311 ************************************ 00:08:13.311 05:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.311 ************************************ 00:08:13.311 END TEST spdk_dd 00:08:13.311 ************************************ 00:08:13.311 00:08:13.311 real 1m0.892s 00:08:13.311 user 0m36.590s 00:08:13.311 sys 0m15.231s 00:08:13.311 05:48:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.311 05:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.311 05:48:34 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:13.311 05:48:34 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:13.311 05:48:34 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:13.311 05:48:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.311 05:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.311 05:48:34 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:13.311 05:48:34 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:13.311 05:48:34 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:13.311 05:48:34 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:13.311 05:48:34 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:08:13.311 05:48:34 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:08:13.311 05:48:34 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:13.311 05:48:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:13.311 05:48:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.311 05:48:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.311 ************************************ 00:08:13.311 START TEST 
nvmf_tcp 00:08:13.311 ************************************ 00:08:13.311 05:48:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:13.311 * Looking for test storage... 00:08:13.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:13.311 05:48:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:13.311 05:48:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:13.311 05:48:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:13.571 05:48:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:13.571 05:48:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:13.571 05:48:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:13.571 05:48:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:13.571 05:48:34 -- scripts/common.sh@335 -- # IFS=.-: 00:08:13.571 05:48:34 -- scripts/common.sh@335 -- # read -ra ver1 00:08:13.571 05:48:34 -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.571 05:48:34 -- scripts/common.sh@336 -- # read -ra ver2 00:08:13.571 05:48:34 -- scripts/common.sh@337 -- # local 'op=<' 00:08:13.571 05:48:34 -- scripts/common.sh@339 -- # ver1_l=2 00:08:13.571 05:48:35 -- scripts/common.sh@340 -- # ver2_l=1 00:08:13.571 05:48:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:13.571 05:48:35 -- scripts/common.sh@343 -- # case "$op" in 00:08:13.571 05:48:35 -- scripts/common.sh@344 -- # : 1 00:08:13.571 05:48:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:13.571 05:48:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.571 05:48:35 -- scripts/common.sh@364 -- # decimal 1 00:08:13.571 05:48:35 -- scripts/common.sh@352 -- # local d=1 00:08:13.571 05:48:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.571 05:48:35 -- scripts/common.sh@354 -- # echo 1 00:08:13.571 05:48:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:13.571 05:48:35 -- scripts/common.sh@365 -- # decimal 2 00:08:13.571 05:48:35 -- scripts/common.sh@352 -- # local d=2 00:08:13.571 05:48:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.571 05:48:35 -- scripts/common.sh@354 -- # echo 2 00:08:13.571 05:48:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:13.571 05:48:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:13.571 05:48:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:13.571 05:48:35 -- scripts/common.sh@367 -- # return 0 00:08:13.571 05:48:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.571 05:48:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:13.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.571 --rc genhtml_branch_coverage=1 00:08:13.571 --rc genhtml_function_coverage=1 00:08:13.571 --rc genhtml_legend=1 00:08:13.571 --rc geninfo_all_blocks=1 00:08:13.571 --rc geninfo_unexecuted_blocks=1 00:08:13.571 00:08:13.571 ' 00:08:13.572 05:48:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.572 --rc genhtml_branch_coverage=1 00:08:13.572 --rc genhtml_function_coverage=1 00:08:13.572 --rc genhtml_legend=1 00:08:13.572 --rc geninfo_all_blocks=1 00:08:13.572 --rc geninfo_unexecuted_blocks=1 00:08:13.572 00:08:13.572 ' 00:08:13.572 05:48:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.572 --rc 
genhtml_branch_coverage=1 00:08:13.572 --rc genhtml_function_coverage=1 00:08:13.572 --rc genhtml_legend=1 00:08:13.572 --rc geninfo_all_blocks=1 00:08:13.572 --rc geninfo_unexecuted_blocks=1 00:08:13.572 00:08:13.572 ' 00:08:13.572 05:48:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.572 --rc genhtml_branch_coverage=1 00:08:13.572 --rc genhtml_function_coverage=1 00:08:13.572 --rc genhtml_legend=1 00:08:13.572 --rc geninfo_all_blocks=1 00:08:13.572 --rc geninfo_unexecuted_blocks=1 00:08:13.572 00:08:13.572 ' 00:08:13.572 05:48:35 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:13.572 05:48:35 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:13.572 05:48:35 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:13.572 05:48:35 -- nvmf/common.sh@7 -- # uname -s 00:08:13.572 05:48:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.572 05:48:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.572 05:48:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.572 05:48:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.572 05:48:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.572 05:48:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.572 05:48:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.572 05:48:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.572 05:48:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.572 05:48:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.572 05:48:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:08:13.572 05:48:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:08:13.572 05:48:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.572 05:48:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.572 05:48:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:13.572 05:48:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.572 05:48:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.572 05:48:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.572 05:48:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.572 05:48:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.572 05:48:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.572 05:48:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.572 05:48:35 -- paths/export.sh@5 -- # export PATH 00:08:13.572 05:48:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.572 05:48:35 -- nvmf/common.sh@46 -- # : 0 00:08:13.572 05:48:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:13.572 05:48:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:13.572 05:48:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:13.572 05:48:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.572 05:48:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.572 05:48:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:13.572 05:48:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:13.572 05:48:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:13.572 05:48:35 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:13.572 05:48:35 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:13.572 05:48:35 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:13.572 05:48:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.572 05:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:13.572 05:48:35 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:13.572 05:48:35 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:13.572 05:48:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:13.572 05:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.572 05:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:13.572 ************************************ 00:08:13.572 START TEST nvmf_host_management 00:08:13.572 ************************************ 00:08:13.572 05:48:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:13.572 * Looking for test storage... 
00:08:13.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.572 05:48:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:13.572 05:48:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:13.572 05:48:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:13.832 05:48:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:13.832 05:48:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:13.832 05:48:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:13.832 05:48:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:13.832 05:48:35 -- scripts/common.sh@335 -- # IFS=.-: 00:08:13.832 05:48:35 -- scripts/common.sh@335 -- # read -ra ver1 00:08:13.832 05:48:35 -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.832 05:48:35 -- scripts/common.sh@336 -- # read -ra ver2 00:08:13.832 05:48:35 -- scripts/common.sh@337 -- # local 'op=<' 00:08:13.832 05:48:35 -- scripts/common.sh@339 -- # ver1_l=2 00:08:13.832 05:48:35 -- scripts/common.sh@340 -- # ver2_l=1 00:08:13.832 05:48:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:13.832 05:48:35 -- scripts/common.sh@343 -- # case "$op" in 00:08:13.832 05:48:35 -- scripts/common.sh@344 -- # : 1 00:08:13.832 05:48:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:13.832 05:48:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.832 05:48:35 -- scripts/common.sh@364 -- # decimal 1 00:08:13.832 05:48:35 -- scripts/common.sh@352 -- # local d=1 00:08:13.832 05:48:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.832 05:48:35 -- scripts/common.sh@354 -- # echo 1 00:08:13.832 05:48:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:13.832 05:48:35 -- scripts/common.sh@365 -- # decimal 2 00:08:13.832 05:48:35 -- scripts/common.sh@352 -- # local d=2 00:08:13.832 05:48:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.832 05:48:35 -- scripts/common.sh@354 -- # echo 2 00:08:13.832 05:48:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:13.832 05:48:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:13.832 05:48:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:13.832 05:48:35 -- scripts/common.sh@367 -- # return 0 00:08:13.832 05:48:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.832 05:48:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:13.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.832 --rc genhtml_branch_coverage=1 00:08:13.832 --rc genhtml_function_coverage=1 00:08:13.832 --rc genhtml_legend=1 00:08:13.832 --rc geninfo_all_blocks=1 00:08:13.832 --rc geninfo_unexecuted_blocks=1 00:08:13.832 00:08:13.832 ' 00:08:13.832 05:48:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:13.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.832 --rc genhtml_branch_coverage=1 00:08:13.832 --rc genhtml_function_coverage=1 00:08:13.832 --rc genhtml_legend=1 00:08:13.832 --rc geninfo_all_blocks=1 00:08:13.832 --rc geninfo_unexecuted_blocks=1 00:08:13.832 00:08:13.832 ' 00:08:13.832 05:48:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:13.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.832 --rc genhtml_branch_coverage=1 00:08:13.832 --rc genhtml_function_coverage=1 00:08:13.832 --rc genhtml_legend=1 00:08:13.832 --rc geninfo_all_blocks=1 00:08:13.832 --rc geninfo_unexecuted_blocks=1 00:08:13.832 00:08:13.832 ' 00:08:13.832 
05:48:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:13.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.832 --rc genhtml_branch_coverage=1 00:08:13.832 --rc genhtml_function_coverage=1 00:08:13.832 --rc genhtml_legend=1 00:08:13.832 --rc geninfo_all_blocks=1 00:08:13.832 --rc geninfo_unexecuted_blocks=1 00:08:13.832 00:08:13.832 ' 00:08:13.832 05:48:35 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:13.832 05:48:35 -- nvmf/common.sh@7 -- # uname -s 00:08:13.832 05:48:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.832 05:48:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.832 05:48:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.832 05:48:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.832 05:48:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.832 05:48:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.832 05:48:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.832 05:48:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.832 05:48:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.832 05:48:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.832 05:48:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:08:13.832 05:48:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:08:13.832 05:48:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.832 05:48:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.832 05:48:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:13.832 05:48:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.832 05:48:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.832 05:48:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.832 05:48:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.832 05:48:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.832 05:48:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.832 05:48:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.832 05:48:35 -- paths/export.sh@5 -- # export PATH 00:08:13.832 05:48:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.832 05:48:35 -- nvmf/common.sh@46 -- # : 0 00:08:13.832 05:48:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:13.832 05:48:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:13.832 05:48:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:13.832 05:48:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.832 05:48:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.832 05:48:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:13.832 05:48:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:13.832 05:48:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:13.832 05:48:35 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.832 05:48:35 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.832 05:48:35 -- target/host_management.sh@104 -- # nvmftestinit 00:08:13.832 05:48:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:13.832 05:48:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.832 05:48:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:13.832 05:48:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:13.832 05:48:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:13.832 05:48:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.832 05:48:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.832 05:48:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.832 05:48:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:13.832 05:48:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:13.832 05:48:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:13.832 05:48:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:13.832 05:48:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:13.832 05:48:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:13.832 05:48:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.832 05:48:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.832 05:48:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:13.832 05:48:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:13.832 05:48:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:13.832 05:48:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:13.832 05:48:35 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:13.832 05:48:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.832 05:48:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:13.832 05:48:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:13.832 05:48:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:13.832 05:48:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:13.832 05:48:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:13.832 Cannot find device "nvmf_init_br" 00:08:13.832 05:48:35 -- nvmf/common.sh@153 -- # true 00:08:13.832 05:48:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:13.832 Cannot find device "nvmf_tgt_br" 00:08:13.832 05:48:35 -- nvmf/common.sh@154 -- # true 00:08:13.832 05:48:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.833 Cannot find device "nvmf_tgt_br2" 00:08:13.833 05:48:35 -- nvmf/common.sh@155 -- # true 00:08:13.833 05:48:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:13.833 Cannot find device "nvmf_init_br" 00:08:13.833 05:48:35 -- nvmf/common.sh@156 -- # true 00:08:13.833 05:48:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:13.833 Cannot find device "nvmf_tgt_br" 00:08:13.833 05:48:35 -- nvmf/common.sh@157 -- # true 00:08:13.833 05:48:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:13.833 Cannot find device "nvmf_tgt_br2" 00:08:13.833 05:48:35 -- nvmf/common.sh@158 -- # true 00:08:13.833 05:48:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:13.833 Cannot find device "nvmf_br" 00:08:13.833 05:48:35 -- nvmf/common.sh@159 -- # true 00:08:13.833 05:48:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:13.833 Cannot find device "nvmf_init_if" 00:08:13.833 05:48:35 -- nvmf/common.sh@160 -- # true 00:08:13.833 05:48:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.833 05:48:35 -- nvmf/common.sh@161 -- # true 00:08:13.833 05:48:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.833 05:48:35 -- nvmf/common.sh@162 -- # true 00:08:13.833 05:48:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:13.833 05:48:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:13.833 05:48:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.833 05:48:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.833 05:48:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.833 05:48:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.833 05:48:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.833 05:48:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:14.092 05:48:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:14.092 05:48:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:14.092 05:48:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:14.092 05:48:35 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:14.092 05:48:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:14.092 05:48:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.092 05:48:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.092 05:48:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.092 05:48:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:14.092 05:48:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:14.092 05:48:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.092 05:48:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.092 05:48:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.092 05:48:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.092 05:48:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.092 05:48:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:14.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:08:14.092 00:08:14.092 --- 10.0.0.2 ping statistics --- 00:08:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.092 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:14.092 05:48:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:14.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.092 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:14.092 00:08:14.092 --- 10.0.0.3 ping statistics --- 00:08:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.092 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:14.092 05:48:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:14.092 00:08:14.092 --- 10.0.0.1 ping statistics --- 00:08:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.092 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:14.092 05:48:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.092 05:48:35 -- nvmf/common.sh@421 -- # return 0 00:08:14.092 05:48:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:14.092 05:48:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.092 05:48:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:14.092 05:48:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:14.092 05:48:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.092 05:48:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:14.092 05:48:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:14.092 05:48:35 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:08:14.092 05:48:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.092 05:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.092 05:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:14.092 ************************************ 00:08:14.092 START TEST nvmf_host_management 00:08:14.092 ************************************ 00:08:14.092 05:48:35 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:08:14.092 05:48:35 -- target/host_management.sh@69 -- # starttarget 00:08:14.092 05:48:35 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:14.092 05:48:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:14.092 05:48:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.092 05:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:14.092 05:48:35 -- nvmf/common.sh@469 -- # nvmfpid=71684 00:08:14.092 05:48:35 -- nvmf/common.sh@470 -- # waitforlisten 71684 00:08:14.092 05:48:35 -- common/autotest_common.sh@829 -- # '[' -z 71684 ']' 00:08:14.092 05:48:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.092 05:48:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.092 05:48:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:14.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.092 05:48:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.092 05:48:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.092 05:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:14.352 [2024-12-15 05:48:35.750116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:14.352 [2024-12-15 05:48:35.750215] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.352 [2024-12-15 05:48:35.891442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.352 [2024-12-15 05:48:35.935483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.352 [2024-12-15 05:48:35.935657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
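The nvmf_veth_init steps traced above boil down to a small iproute2 topology: the initiator side stays in the root namespace on 10.0.0.1, the target interfaces are moved into the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, and both sides are joined through the nvmf_br bridge before the pings confirm reachability. A condensed sketch, with names and addresses taken from the trace and the second target interface omitted for brevity (this is an illustration, not the harness script itself):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # root namespace -> target namespace, as verified above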
00:08:14.352 [2024-12-15 05:48:35.935673] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.352 [2024-12-15 05:48:35.935683] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.352 [2024-12-15 05:48:35.935899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.352 [2024-12-15 05:48:35.936038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.352 [2024-12-15 05:48:35.936601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:14.352 [2024-12-15 05:48:35.936638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.611 05:48:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.611 05:48:36 -- common/autotest_common.sh@862 -- # return 0 00:08:14.611 05:48:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:14.611 05:48:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.611 05:48:36 -- common/autotest_common.sh@10 -- # set +x 00:08:14.611 05:48:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.611 05:48:36 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.611 05:48:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.611 05:48:36 -- common/autotest_common.sh@10 -- # set +x 00:08:14.611 [2024-12-15 05:48:36.066151] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.611 05:48:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.611 05:48:36 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:14.611 05:48:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.611 05:48:36 -- common/autotest_common.sh@10 -- # set +x 00:08:14.611 05:48:36 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:14.611 05:48:36 -- target/host_management.sh@23 -- # cat 00:08:14.611 05:48:36 -- target/host_management.sh@30 -- # rpc_cmd 00:08:14.611 05:48:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.611 05:48:36 -- common/autotest_common.sh@10 -- # set +x 00:08:14.611 Malloc0 00:08:14.611 [2024-12-15 05:48:36.130954] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.611 05:48:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.611 05:48:36 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:14.611 05:48:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.611 05:48:36 -- common/autotest_common.sh@10 -- # set +x 00:08:14.611 05:48:36 -- target/host_management.sh@73 -- # perfpid=71730 00:08:14.611 05:48:36 -- target/host_management.sh@74 -- # waitforlisten 71730 /var/tmp/bdevperf.sock 00:08:14.611 05:48:36 -- common/autotest_common.sh@829 -- # '[' -z 71730 ']' 00:08:14.611 05:48:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:14.611 05:48:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.611 05:48:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:14.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
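The rpcs.txt batch piped to rpc_cmd above is not echoed in the trace, but judging from the Malloc0 bdev it creates and the NVMe/TCP listener that comes up on 10.0.0.2 port 4420 (with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from the script header), it is roughly equivalent to the following individual rpc.py calls. Treat this as an illustrative reconstruction, not the literal file contents:

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0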
00:08:14.611 05:48:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.611 05:48:36 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:14.611 05:48:36 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:14.611 05:48:36 -- common/autotest_common.sh@10 -- # set +x 00:08:14.611 05:48:36 -- nvmf/common.sh@520 -- # config=() 00:08:14.611 05:48:36 -- nvmf/common.sh@520 -- # local subsystem config 00:08:14.611 05:48:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:14.611 05:48:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:14.611 { 00:08:14.611 "params": { 00:08:14.611 "name": "Nvme$subsystem", 00:08:14.611 "trtype": "$TEST_TRANSPORT", 00:08:14.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.612 "adrfam": "ipv4", 00:08:14.612 "trsvcid": "$NVMF_PORT", 00:08:14.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.612 "hdgst": ${hdgst:-false}, 00:08:14.612 "ddgst": ${ddgst:-false} 00:08:14.612 }, 00:08:14.612 "method": "bdev_nvme_attach_controller" 00:08:14.612 } 00:08:14.612 EOF 00:08:14.612 )") 00:08:14.612 05:48:36 -- nvmf/common.sh@542 -- # cat 00:08:14.612 05:48:36 -- nvmf/common.sh@544 -- # jq . 00:08:14.612 05:48:36 -- nvmf/common.sh@545 -- # IFS=, 00:08:14.612 05:48:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:14.612 "params": { 00:08:14.612 "name": "Nvme0", 00:08:14.612 "trtype": "tcp", 00:08:14.612 "traddr": "10.0.0.2", 00:08:14.612 "adrfam": "ipv4", 00:08:14.612 "trsvcid": "4420", 00:08:14.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:14.612 "hdgst": false, 00:08:14.612 "ddgst": false 00:08:14.612 }, 00:08:14.612 "method": "bdev_nvme_attach_controller" 00:08:14.612 }' 00:08:14.612 [2024-12-15 05:48:36.228648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:14.612 [2024-12-15 05:48:36.228734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71730 ] 00:08:14.871 [2024-12-15 05:48:36.370822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.871 [2024-12-15 05:48:36.411468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.130 Running I/O for 10 seconds... 
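The --json /dev/fd/63 argument hands bdevperf the output of gen_nvmf_target_json. The bdev_nvme_attach_controller entry printed above is the interesting part; the surrounding wrapper is not echoed in the trace, but the complete document presumably expands to something like the sketch below, written here as a standalone file for clarity (the file path is arbitrary):

  # Hypothetical standalone equivalent of the /dev/fd/63 config used above.
  cat > /tmp/bdevperf_nvme0.json <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  JSON
  # bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10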
00:08:15.698 05:48:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.698 05:48:37 -- common/autotest_common.sh@862 -- # return 0 00:08:15.698 05:48:37 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:15.698 05:48:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.698 05:48:37 -- common/autotest_common.sh@10 -- # set +x 00:08:15.698 05:48:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.698 05:48:37 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.698 05:48:37 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:15.698 05:48:37 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:15.698 05:48:37 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:15.698 05:48:37 -- target/host_management.sh@52 -- # local ret=1 00:08:15.698 05:48:37 -- target/host_management.sh@53 -- # local i 00:08:15.698 05:48:37 -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:15.698 05:48:37 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:15.698 05:48:37 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:15.698 05:48:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.698 05:48:37 -- common/autotest_common.sh@10 -- # set +x 00:08:15.698 05:48:37 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:15.698 05:48:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.958 05:48:37 -- target/host_management.sh@55 -- # read_io_count=2174 00:08:15.958 05:48:37 -- target/host_management.sh@58 -- # '[' 2174 -ge 100 ']' 00:08:15.958 05:48:37 -- target/host_management.sh@59 -- # ret=0 00:08:15.958 05:48:37 -- target/host_management.sh@60 -- # break 00:08:15.958 05:48:37 -- target/host_management.sh@64 -- # return 0 00:08:15.958 05:48:37 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:15.958 05:48:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.958 05:48:37 -- common/autotest_common.sh@10 -- # set +x 00:08:15.958 [2024-12-15 05:48:37.345760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to 
be set 00:08:15.958 [2024-12-15 05:48:37.345874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.958 [2024-12-15 05:48:37.345935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.345944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.345952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.345960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.345969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.345977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.345986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.345994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346035] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9924f0 is same with the state(5) to be set 00:08:15.959 [2024-12-15 05:48:37.346147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.959 [2024-12-15 05:48:37.346709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.959 [2024-12-15 05:48:37.346717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.346990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.346999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.960 [2024-12-15 05:48:37.347519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.960 [2024-12-15 05:48:37.347531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.961 [2024-12-15 05:48:37.347544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.961 [2024-12-15 05:48:37.347554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.961 [2024-12-15 05:48:37.347565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f10120 is same with the state(5) to be set 00:08:15.961 [2024-12-15 05:48:37.347653] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f10120 was disconnected and freed. reset controller. 00:08:15.961 05:48:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.961 05:48:37 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:15.961 05:48:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.961 [2024-12-15 05:48:37.348812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:15.961 05:48:37 -- common/autotest_common.sh@10 -- # set +x 00:08:15.961 task offset: 34432 on job bdev=Nvme0n1 fails 00:08:15.961 00:08:15.961 Latency(us) 00:08:15.961 [2024-12-15T05:48:37.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.961 [2024-12-15T05:48:37.602Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:15.961 [2024-12-15T05:48:37.602Z] Job: Nvme0n1 ended in about 0.80 seconds with error 00:08:15.961 Verification LBA range: start 0x0 length 0x400 00:08:15.961 Nvme0n1 : 0.80 2876.57 179.79 79.84 0.00 21303.33 4825.83 30742.34 00:08:15.961 [2024-12-15T05:48:37.602Z] =================================================================================================================== 00:08:15.961 [2024-12-15T05:48:37.602Z] Total : 2876.57 179.79 79.84 0.00 21303.33 4825.83 30742.34 00:08:15.961 [2024-12-15 05:48:37.350840] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.961 [2024-12-15 05:48:37.350868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f126a0 (9): Bad file descriptor 00:08:15.961 05:48:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.961 05:48:37 -- target/host_management.sh@87 -- # sleep 1 00:08:15.961 [2024-12-15 05:48:37.363404] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:16.897 05:48:38 -- target/host_management.sh@91 -- # kill -9 71730 00:08:16.897 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71730) - No such process 00:08:16.897 05:48:38 -- target/host_management.sh@91 -- # true 00:08:16.897 05:48:38 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:16.897 05:48:38 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:16.897 05:48:38 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:16.897 05:48:38 -- nvmf/common.sh@520 -- # config=() 00:08:16.897 05:48:38 -- nvmf/common.sh@520 -- # local subsystem config 00:08:16.897 05:48:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:16.897 05:48:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:16.897 { 00:08:16.897 "params": { 00:08:16.897 "name": "Nvme$subsystem", 00:08:16.897 "trtype": "$TEST_TRANSPORT", 00:08:16.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.897 "adrfam": "ipv4", 00:08:16.897 "trsvcid": "$NVMF_PORT", 00:08:16.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.897 "hdgst": ${hdgst:-false}, 00:08:16.897 "ddgst": ${ddgst:-false} 00:08:16.897 }, 00:08:16.897 "method": "bdev_nvme_attach_controller" 00:08:16.897 } 00:08:16.897 EOF 00:08:16.897 )") 00:08:16.897 05:48:38 -- nvmf/common.sh@542 -- # cat 00:08:16.897 05:48:38 -- nvmf/common.sh@544 -- # jq . 00:08:16.897 05:48:38 -- nvmf/common.sh@545 -- # IFS=, 00:08:16.897 05:48:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:16.897 "params": { 00:08:16.897 "name": "Nvme0", 00:08:16.897 "trtype": "tcp", 00:08:16.897 "traddr": "10.0.0.2", 00:08:16.897 "adrfam": "ipv4", 00:08:16.897 "trsvcid": "4420", 00:08:16.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:16.898 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:16.898 "hdgst": false, 00:08:16.898 "ddgst": false 00:08:16.898 }, 00:08:16.898 "method": "bdev_nvme_attach_controller" 00:08:16.898 }' 00:08:16.898 [2024-12-15 05:48:38.407661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:16.898 [2024-12-15 05:48:38.407733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71774 ] 00:08:17.157 [2024-12-15 05:48:38.540643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.157 [2024-12-15 05:48:38.573325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.157 Running I/O for 1 seconds... 
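The second bdevperf pass above builds its target configuration on the fly: gen_nvmf_target_json 0 expands the heredoc shown into a bdev_nvme_attach_controller entry and hands it to bdevperf through --json /dev/fd/62. As a sketch only (the surrounding "subsystems"/"config" wrapper is the standard SPDK JSON-config layout and is assumed here rather than printed in this trace, and the temporary file path is hypothetical), an equivalent standalone invocation would look roughly like:

cat <<'EOF' > /tmp/bdevperf_nvme0.json     # hypothetical path standing in for /dev/fd/62
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload parameters as the run above: queue depth 64, 64 KiB I/O, verify workload, 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1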
00:08:18.094 00:08:18.094 Latency(us) 00:08:18.094 [2024-12-15T05:48:39.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.094 [2024-12-15T05:48:39.735Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:18.094 Verification LBA range: start 0x0 length 0x400 00:08:18.094 Nvme0n1 : 1.01 3099.81 193.74 0.00 0.00 20334.83 826.65 27644.28 00:08:18.094 [2024-12-15T05:48:39.735Z] =================================================================================================================== 00:08:18.094 [2024-12-15T05:48:39.735Z] Total : 3099.81 193.74 0.00 0.00 20334.83 826.65 27644.28 00:08:18.353 05:48:39 -- target/host_management.sh@101 -- # stoptarget 00:08:18.353 05:48:39 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:18.353 05:48:39 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:18.353 05:48:39 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:18.353 05:48:39 -- target/host_management.sh@40 -- # nvmftestfini 00:08:18.353 05:48:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:18.353 05:48:39 -- nvmf/common.sh@116 -- # sync 00:08:18.353 05:48:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:18.353 05:48:39 -- nvmf/common.sh@119 -- # set +e 00:08:18.353 05:48:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:18.353 05:48:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:18.353 rmmod nvme_tcp 00:08:18.353 rmmod nvme_fabrics 00:08:18.353 rmmod nvme_keyring 00:08:18.353 05:48:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:18.353 05:48:39 -- nvmf/common.sh@123 -- # set -e 00:08:18.353 05:48:39 -- nvmf/common.sh@124 -- # return 0 00:08:18.353 05:48:39 -- nvmf/common.sh@477 -- # '[' -n 71684 ']' 00:08:18.353 05:48:39 -- nvmf/common.sh@478 -- # killprocess 71684 00:08:18.353 05:48:39 -- common/autotest_common.sh@936 -- # '[' -z 71684 ']' 00:08:18.353 05:48:39 -- common/autotest_common.sh@940 -- # kill -0 71684 00:08:18.353 05:48:39 -- common/autotest_common.sh@941 -- # uname 00:08:18.353 05:48:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:18.353 05:48:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71684 00:08:18.612 05:48:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:18.612 05:48:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:18.612 killing process with pid 71684 00:08:18.612 05:48:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71684' 00:08:18.612 05:48:40 -- common/autotest_common.sh@955 -- # kill 71684 00:08:18.612 05:48:40 -- common/autotest_common.sh@960 -- # wait 71684 00:08:18.612 [2024-12-15 05:48:40.145346] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:18.612 05:48:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:18.612 05:48:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:18.612 05:48:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:18.612 05:48:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.612 05:48:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:18.612 05:48:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.612 05:48:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.612 05:48:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.612 05:48:40 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:18.612 00:08:18.612 real 0m4.509s 00:08:18.612 user 0m19.290s 00:08:18.612 sys 0m1.127s 00:08:18.612 05:48:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.612 05:48:40 -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 ************************************ 00:08:18.612 END TEST nvmf_host_management 00:08:18.612 ************************************ 00:08:18.612 05:48:40 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:08:18.612 00:08:18.612 real 0m5.184s 00:08:18.612 user 0m19.491s 00:08:18.612 sys 0m1.386s 00:08:18.612 05:48:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.612 05:48:40 -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 ************************************ 00:08:18.612 END TEST nvmf_host_management 00:08:18.612 ************************************ 00:08:18.872 05:48:40 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:18.872 05:48:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:18.872 05:48:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.872 05:48:40 -- common/autotest_common.sh@10 -- # set +x 00:08:18.872 ************************************ 00:08:18.872 START TEST nvmf_lvol 00:08:18.872 ************************************ 00:08:18.872 05:48:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:18.872 * Looking for test storage... 00:08:18.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.872 05:48:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:18.872 05:48:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:18.872 05:48:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:18.872 05:48:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:18.872 05:48:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:18.872 05:48:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:18.872 05:48:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:18.872 05:48:40 -- scripts/common.sh@335 -- # IFS=.-: 00:08:18.872 05:48:40 -- scripts/common.sh@335 -- # read -ra ver1 00:08:18.872 05:48:40 -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.872 05:48:40 -- scripts/common.sh@336 -- # read -ra ver2 00:08:18.872 05:48:40 -- scripts/common.sh@337 -- # local 'op=<' 00:08:18.872 05:48:40 -- scripts/common.sh@339 -- # ver1_l=2 00:08:18.872 05:48:40 -- scripts/common.sh@340 -- # ver2_l=1 00:08:18.872 05:48:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:18.872 05:48:40 -- scripts/common.sh@343 -- # case "$op" in 00:08:18.872 05:48:40 -- scripts/common.sh@344 -- # : 1 00:08:18.872 05:48:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:18.872 05:48:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.872 05:48:40 -- scripts/common.sh@364 -- # decimal 1 00:08:18.872 05:48:40 -- scripts/common.sh@352 -- # local d=1 00:08:18.872 05:48:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.872 05:48:40 -- scripts/common.sh@354 -- # echo 1 00:08:18.872 05:48:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:18.872 05:48:40 -- scripts/common.sh@365 -- # decimal 2 00:08:18.872 05:48:40 -- scripts/common.sh@352 -- # local d=2 00:08:18.872 05:48:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.872 05:48:40 -- scripts/common.sh@354 -- # echo 2 00:08:18.872 05:48:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:18.872 05:48:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:18.872 05:48:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:18.872 05:48:40 -- scripts/common.sh@367 -- # return 0 00:08:18.872 05:48:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.872 05:48:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:18.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.872 --rc genhtml_branch_coverage=1 00:08:18.872 --rc genhtml_function_coverage=1 00:08:18.872 --rc genhtml_legend=1 00:08:18.872 --rc geninfo_all_blocks=1 00:08:18.872 --rc geninfo_unexecuted_blocks=1 00:08:18.872 00:08:18.872 ' 00:08:18.872 05:48:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:18.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.872 --rc genhtml_branch_coverage=1 00:08:18.872 --rc genhtml_function_coverage=1 00:08:18.872 --rc genhtml_legend=1 00:08:18.872 --rc geninfo_all_blocks=1 00:08:18.872 --rc geninfo_unexecuted_blocks=1 00:08:18.872 00:08:18.872 ' 00:08:18.872 05:48:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:18.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.872 --rc genhtml_branch_coverage=1 00:08:18.872 --rc genhtml_function_coverage=1 00:08:18.872 --rc genhtml_legend=1 00:08:18.872 --rc geninfo_all_blocks=1 00:08:18.872 --rc geninfo_unexecuted_blocks=1 00:08:18.872 00:08:18.872 ' 00:08:18.872 05:48:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:18.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.872 --rc genhtml_branch_coverage=1 00:08:18.872 --rc genhtml_function_coverage=1 00:08:18.872 --rc genhtml_legend=1 00:08:18.872 --rc geninfo_all_blocks=1 00:08:18.872 --rc geninfo_unexecuted_blocks=1 00:08:18.872 00:08:18.872 ' 00:08:18.872 05:48:40 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.872 05:48:40 -- nvmf/common.sh@7 -- # uname -s 00:08:18.872 05:48:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.872 05:48:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.872 05:48:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.872 05:48:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.872 05:48:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.872 05:48:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.872 05:48:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.872 05:48:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.872 05:48:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.872 05:48:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.872 05:48:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:08:18.872 
05:48:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:08:18.872 05:48:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.872 05:48:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.872 05:48:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:18.872 05:48:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.872 05:48:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.872 05:48:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.872 05:48:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.872 05:48:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.872 05:48:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.872 05:48:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.872 05:48:40 -- paths/export.sh@5 -- # export PATH 00:08:18.872 05:48:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.872 05:48:40 -- nvmf/common.sh@46 -- # : 0 00:08:18.872 05:48:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:18.872 05:48:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:18.872 05:48:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:18.872 05:48:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.872 05:48:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.872 05:48:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:08:18.872 05:48:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:18.872 05:48:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:18.872 05:48:40 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.872 05:48:40 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.872 05:48:40 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:18.872 05:48:40 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:18.872 05:48:40 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.872 05:48:40 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:18.872 05:48:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:18.872 05:48:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.872 05:48:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:18.872 05:48:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:18.872 05:48:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:18.872 05:48:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.872 05:48:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.872 05:48:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.872 05:48:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:18.872 05:48:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:18.872 05:48:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:18.872 05:48:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:18.872 05:48:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:18.872 05:48:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:18.872 05:48:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.872 05:48:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.873 05:48:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:18.873 05:48:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:18.873 05:48:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:18.873 05:48:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:18.873 05:48:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:18.873 05:48:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.873 05:48:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:18.873 05:48:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:18.873 05:48:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:18.873 05:48:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:18.873 05:48:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:19.131 05:48:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:19.131 Cannot find device "nvmf_tgt_br" 00:08:19.131 05:48:40 -- nvmf/common.sh@154 -- # true 00:08:19.131 05:48:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:19.132 Cannot find device "nvmf_tgt_br2" 00:08:19.132 05:48:40 -- nvmf/common.sh@155 -- # true 00:08:19.132 05:48:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:19.132 05:48:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:19.132 Cannot find device "nvmf_tgt_br" 00:08:19.132 05:48:40 -- nvmf/common.sh@157 -- # true 00:08:19.132 05:48:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:19.132 Cannot find device "nvmf_tgt_br2" 00:08:19.132 05:48:40 -- nvmf/common.sh@158 -- # true 00:08:19.132 05:48:40 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:08:19.132 05:48:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:19.132 05:48:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:19.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.132 05:48:40 -- nvmf/common.sh@161 -- # true 00:08:19.132 05:48:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:19.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.132 05:48:40 -- nvmf/common.sh@162 -- # true 00:08:19.132 05:48:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:19.132 05:48:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:19.132 05:48:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:19.132 05:48:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:19.132 05:48:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:19.132 05:48:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:19.132 05:48:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:19.132 05:48:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:19.132 05:48:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:19.132 05:48:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:19.132 05:48:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:19.132 05:48:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:19.132 05:48:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:19.132 05:48:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.132 05:48:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.132 05:48:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.132 05:48:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:19.132 05:48:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:19.132 05:48:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.132 05:48:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.390 05:48:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.390 05:48:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.391 05:48:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.391 05:48:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:19.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:08:19.391 00:08:19.391 --- 10.0.0.2 ping statistics --- 00:08:19.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.391 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:19.391 05:48:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:19.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:19.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:19.391 00:08:19.391 --- 10.0.0.3 ping statistics --- 00:08:19.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.391 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:19.391 05:48:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:08:19.391 00:08:19.391 --- 10.0.0.1 ping statistics --- 00:08:19.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.391 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:08:19.391 05:48:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.391 05:48:40 -- nvmf/common.sh@421 -- # return 0 00:08:19.391 05:48:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:19.391 05:48:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.391 05:48:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:19.391 05:48:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:19.391 05:48:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.391 05:48:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:19.391 05:48:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:19.391 05:48:40 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:19.391 05:48:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:19.391 05:48:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.391 05:48:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.391 05:48:40 -- nvmf/common.sh@469 -- # nvmfpid=72003 00:08:19.391 05:48:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:19.391 05:48:40 -- nvmf/common.sh@470 -- # waitforlisten 72003 00:08:19.391 05:48:40 -- common/autotest_common.sh@829 -- # '[' -z 72003 ']' 00:08:19.391 05:48:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.391 05:48:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.391 05:48:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.391 05:48:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.391 05:48:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.391 [2024-12-15 05:48:40.884994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:19.391 [2024-12-15 05:48:40.885063] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.391 [2024-12-15 05:48:41.021799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.650 [2024-12-15 05:48:41.054983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.650 [2024-12-15 05:48:41.055147] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.650 [2024-12-15 05:48:41.055159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:19.650 [2024-12-15 05:48:41.055184] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.650 [2024-12-15 05:48:41.055349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.650 [2024-12-15 05:48:41.055666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.650 [2024-12-15 05:48:41.055743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.587 05:48:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.587 05:48:41 -- common/autotest_common.sh@862 -- # return 0 00:08:20.587 05:48:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:20.587 05:48:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.587 05:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:20.587 05:48:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.587 05:48:41 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:20.587 [2024-12-15 05:48:42.173479] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.587 05:48:42 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.155 05:48:42 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:21.155 05:48:42 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.155 05:48:42 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:21.155 05:48:42 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:21.414 05:48:43 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:21.673 05:48:43 -- target/nvmf_lvol.sh@29 -- # lvs=ef8fa897-ce01-4386-a56a-0f4955355ca3 00:08:21.673 05:48:43 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef8fa897-ce01-4386-a56a-0f4955355ca3 lvol 20 00:08:21.932 05:48:43 -- target/nvmf_lvol.sh@32 -- # lvol=535222df-feef-41ac-8a2e-1b945db0a5dc 00:08:21.932 05:48:43 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.191 05:48:43 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 535222df-feef-41ac-8a2e-1b945db0a5dc 00:08:22.449 05:48:44 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:22.708 [2024-12-15 05:48:44.259766] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.708 05:48:44 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.984 05:48:44 -- target/nvmf_lvol.sh@42 -- # perf_pid=72084 00:08:22.984 05:48:44 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:22.984 05:48:44 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:23.954 05:48:45 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 535222df-feef-41ac-8a2e-1b945db0a5dc MY_SNAPSHOT 
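The nvmf_lvol setup traced above layers the target step by step: two 64 MiB malloc bdevs, a raid0 across them, an lvstore on the raid, a 20 MiB lvol, and the NVMe-oF subsystem that exports it, before spdk_nvme_perf and the snapshot are started. Condensed into a plain rpc.py sequence (a sketch assembled from the commands already shown; LVS and LVOL capture the UUIDs the create calls print, matching ef8fa897-... and 535222df-... in the trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512                                   # Malloc0
$RPC bdev_malloc_create 64 512                                   # Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
LVS=$($RPC bdev_lvol_create_lvstore raid0 lvs)                   # prints the lvstore UUID
LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 20)                  # 20 MiB lvol, prints its UUID
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT                      # the step the trace reaches here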
00:08:24.521 05:48:45 -- target/nvmf_lvol.sh@47 -- # snapshot=9230e1ee-a474-47f9-aece-deb2e25d729e 00:08:24.521 05:48:45 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 535222df-feef-41ac-8a2e-1b945db0a5dc 30 00:08:24.521 05:48:46 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9230e1ee-a474-47f9-aece-deb2e25d729e MY_CLONE 00:08:24.779 05:48:46 -- target/nvmf_lvol.sh@49 -- # clone=47287e0e-2869-49a2-926c-37fe1d8f4a82 00:08:24.779 05:48:46 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 47287e0e-2869-49a2-926c-37fe1d8f4a82 00:08:25.346 05:48:46 -- target/nvmf_lvol.sh@53 -- # wait 72084 00:08:33.466 Initializing NVMe Controllers 00:08:33.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:33.466 Controller IO queue size 128, less than required. 00:08:33.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:33.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:33.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:33.466 Initialization complete. Launching workers. 00:08:33.466 ======================================================== 00:08:33.466 Latency(us) 00:08:33.466 Device Information : IOPS MiB/s Average min max 00:08:33.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10450.80 40.82 12252.61 2111.87 67278.38 00:08:33.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10425.60 40.72 12276.85 2842.96 59263.52 00:08:33.466 ======================================================== 00:08:33.466 Total : 20876.40 81.55 12264.72 2111.87 67278.38 00:08:33.466 00:08:33.466 05:48:54 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:33.466 05:48:55 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 535222df-feef-41ac-8a2e-1b945db0a5dc 00:08:34.033 05:48:55 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef8fa897-ce01-4386-a56a-0f4955355ca3 00:08:34.033 05:48:55 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:34.033 05:48:55 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:34.033 05:48:55 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:34.033 05:48:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:34.033 05:48:55 -- nvmf/common.sh@116 -- # sync 00:08:34.293 05:48:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:34.293 05:48:55 -- nvmf/common.sh@119 -- # set +e 00:08:34.293 05:48:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:34.293 05:48:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:34.293 rmmod nvme_tcp 00:08:34.293 rmmod nvme_fabrics 00:08:34.293 rmmod nvme_keyring 00:08:34.293 05:48:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:34.293 05:48:55 -- nvmf/common.sh@123 -- # set -e 00:08:34.293 05:48:55 -- nvmf/common.sh@124 -- # return 0 00:08:34.293 05:48:55 -- nvmf/common.sh@477 -- # '[' -n 72003 ']' 00:08:34.293 05:48:55 -- nvmf/common.sh@478 -- # killprocess 72003 00:08:34.293 05:48:55 -- common/autotest_common.sh@936 -- # '[' -z 72003 ']' 00:08:34.293 05:48:55 -- common/autotest_common.sh@940 -- # kill -0 72003 00:08:34.293 05:48:55 -- common/autotest_common.sh@941 -- # uname 00:08:34.293 
05:48:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:34.293 05:48:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72003 00:08:34.293 05:48:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:34.293 05:48:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:34.293 05:48:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72003' 00:08:34.293 killing process with pid 72003 00:08:34.293 05:48:55 -- common/autotest_common.sh@955 -- # kill 72003 00:08:34.293 05:48:55 -- common/autotest_common.sh@960 -- # wait 72003 00:08:34.553 05:48:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:34.553 05:48:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:34.553 05:48:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:34.553 05:48:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.553 05:48:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:34.553 05:48:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.553 05:48:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.553 05:48:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.553 05:48:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:34.553 00:08:34.553 real 0m15.703s 00:08:34.553 user 1m5.158s 00:08:34.553 sys 0m4.485s 00:08:34.553 05:48:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.553 05:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:34.553 ************************************ 00:08:34.553 END TEST nvmf_lvol 00:08:34.553 ************************************ 00:08:34.553 05:48:56 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:34.553 05:48:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:34.553 05:48:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.553 05:48:56 -- common/autotest_common.sh@10 -- # set +x 00:08:34.553 ************************************ 00:08:34.553 START TEST nvmf_lvs_grow 00:08:34.553 ************************************ 00:08:34.553 05:48:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:34.553 * Looking for test storage... 
00:08:34.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.553 05:48:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:34.553 05:48:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:34.553 05:48:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.553 05:48:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.553 05:48:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.553 05:48:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.553 05:48:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.553 05:48:56 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.553 05:48:56 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.553 05:48:56 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.553 05:48:56 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.553 05:48:56 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.553 05:48:56 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.553 05:48:56 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.553 05:48:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.553 05:48:56 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.553 05:48:56 -- scripts/common.sh@344 -- # : 1 00:08:34.553 05:48:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.553 05:48:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.553 05:48:56 -- scripts/common.sh@364 -- # decimal 1 00:08:34.553 05:48:56 -- scripts/common.sh@352 -- # local d=1 00:08:34.553 05:48:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.553 05:48:56 -- scripts/common.sh@354 -- # echo 1 00:08:34.553 05:48:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.553 05:48:56 -- scripts/common.sh@365 -- # decimal 2 00:08:34.553 05:48:56 -- scripts/common.sh@352 -- # local d=2 00:08:34.553 05:48:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.553 05:48:56 -- scripts/common.sh@354 -- # echo 2 00:08:34.553 05:48:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.553 05:48:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.553 05:48:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.553 05:48:56 -- scripts/common.sh@367 -- # return 0 00:08:34.553 05:48:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.553 05:48:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.553 --rc genhtml_branch_coverage=1 00:08:34.553 --rc genhtml_function_coverage=1 00:08:34.553 --rc genhtml_legend=1 00:08:34.553 --rc geninfo_all_blocks=1 00:08:34.553 --rc geninfo_unexecuted_blocks=1 00:08:34.553 00:08:34.553 ' 00:08:34.553 05:48:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.553 --rc genhtml_branch_coverage=1 00:08:34.553 --rc genhtml_function_coverage=1 00:08:34.553 --rc genhtml_legend=1 00:08:34.553 --rc geninfo_all_blocks=1 00:08:34.553 --rc geninfo_unexecuted_blocks=1 00:08:34.553 00:08:34.553 ' 00:08:34.553 05:48:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.553 --rc genhtml_branch_coverage=1 00:08:34.553 --rc genhtml_function_coverage=1 00:08:34.553 --rc genhtml_legend=1 00:08:34.553 --rc geninfo_all_blocks=1 00:08:34.553 --rc geninfo_unexecuted_blocks=1 00:08:34.553 00:08:34.553 ' 00:08:34.553 
05:48:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.553 --rc genhtml_branch_coverage=1 00:08:34.553 --rc genhtml_function_coverage=1 00:08:34.553 --rc genhtml_legend=1 00:08:34.553 --rc geninfo_all_blocks=1 00:08:34.553 --rc geninfo_unexecuted_blocks=1 00:08:34.553 00:08:34.553 ' 00:08:34.553 05:48:56 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.553 05:48:56 -- nvmf/common.sh@7 -- # uname -s 00:08:34.553 05:48:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.553 05:48:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.553 05:48:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.553 05:48:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.553 05:48:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.553 05:48:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.553 05:48:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.553 05:48:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.553 05:48:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.553 05:48:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.553 05:48:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:08:34.553 05:48:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:08:34.553 05:48:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.553 05:48:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.553 05:48:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.553 05:48:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.553 05:48:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.553 05:48:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.553 05:48:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.554 05:48:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.554 05:48:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.554 05:48:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.554 05:48:56 -- paths/export.sh@5 -- # export PATH 00:08:34.554 05:48:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.554 05:48:56 -- nvmf/common.sh@46 -- # : 0 00:08:34.554 05:48:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:34.554 05:48:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:34.554 05:48:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:34.554 05:48:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.554 05:48:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.554 05:48:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:34.554 05:48:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:34.814 05:48:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:34.814 05:48:56 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.814 05:48:56 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:34.814 05:48:56 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:34.814 05:48:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:34.814 05:48:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.814 05:48:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:34.814 05:48:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:34.814 05:48:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:34.814 05:48:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.814 05:48:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.814 05:48:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.814 05:48:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:34.814 05:48:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:34.814 05:48:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:34.814 05:48:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:34.814 05:48:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:34.814 05:48:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:34.814 05:48:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.814 05:48:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.814 05:48:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:34.814 05:48:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:34.814 05:48:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.814 05:48:56 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.814 05:48:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.814 05:48:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.814 05:48:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.814 05:48:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.814 05:48:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.814 05:48:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.814 05:48:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:34.814 05:48:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:34.814 Cannot find device "nvmf_tgt_br" 00:08:34.814 05:48:56 -- nvmf/common.sh@154 -- # true 00:08:34.814 05:48:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.814 Cannot find device "nvmf_tgt_br2" 00:08:34.814 05:48:56 -- nvmf/common.sh@155 -- # true 00:08:34.814 05:48:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:34.814 05:48:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:34.814 Cannot find device "nvmf_tgt_br" 00:08:34.814 05:48:56 -- nvmf/common.sh@157 -- # true 00:08:34.814 05:48:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:34.814 Cannot find device "nvmf_tgt_br2" 00:08:34.814 05:48:56 -- nvmf/common.sh@158 -- # true 00:08:34.814 05:48:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:34.814 05:48:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:34.814 05:48:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.814 05:48:56 -- nvmf/common.sh@161 -- # true 00:08:34.814 05:48:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.814 05:48:56 -- nvmf/common.sh@162 -- # true 00:08:34.814 05:48:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.814 05:48:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.814 05:48:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.814 05:48:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.814 05:48:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.814 05:48:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.814 05:48:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.814 05:48:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:34.814 05:48:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:34.814 05:48:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:34.814 05:48:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:34.814 05:48:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:34.814 05:48:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:34.814 05:48:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.814 05:48:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:08:34.814 05:48:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.814 05:48:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:34.814 05:48:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:34.814 05:48:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.073 05:48:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.073 05:48:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.073 05:48:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.073 05:48:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.073 05:48:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:35.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:08:35.073 00:08:35.073 --- 10.0.0.2 ping statistics --- 00:08:35.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.073 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:35.073 05:48:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:35.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:08:35.073 00:08:35.073 --- 10.0.0.3 ping statistics --- 00:08:35.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.073 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:35.073 05:48:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:35.073 00:08:35.073 --- 10.0.0.1 ping statistics --- 00:08:35.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.073 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:35.073 05:48:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.073 05:48:56 -- nvmf/common.sh@421 -- # return 0 00:08:35.073 05:48:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:35.073 05:48:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.073 05:48:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:35.073 05:48:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:35.073 05:48:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.073 05:48:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:35.073 05:48:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:35.073 05:48:56 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:35.073 05:48:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:35.073 05:48:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.073 05:48:56 -- common/autotest_common.sh@10 -- # set +x 00:08:35.073 05:48:56 -- nvmf/common.sh@469 -- # nvmfpid=72408 00:08:35.073 05:48:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:35.073 05:48:56 -- nvmf/common.sh@470 -- # waitforlisten 72408 00:08:35.073 05:48:56 -- common/autotest_common.sh@829 -- # '[' -z 72408 ']' 00:08:35.073 05:48:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.073 05:48:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:35.074 05:48:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.074 05:48:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.074 05:48:56 -- common/autotest_common.sh@10 -- # set +x 00:08:35.074 [2024-12-15 05:48:56.602321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:35.074 [2024-12-15 05:48:56.602419] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.333 [2024-12-15 05:48:56.741002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.333 [2024-12-15 05:48:56.777715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:35.333 [2024-12-15 05:48:56.777895] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.333 [2024-12-15 05:48:56.777908] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.333 [2024-12-15 05:48:56.777917] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.333 [2024-12-15 05:48:56.777940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.333 05:48:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.333 05:48:56 -- common/autotest_common.sh@862 -- # return 0 00:08:35.333 05:48:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:35.333 05:48:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.333 05:48:56 -- common/autotest_common.sh@10 -- # set +x 00:08:35.333 05:48:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.333 05:48:56 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.592 [2024-12-15 05:48:57.165567] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:35.592 05:48:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.592 05:48:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.592 05:48:57 -- common/autotest_common.sh@10 -- # set +x 00:08:35.592 ************************************ 00:08:35.592 START TEST lvs_grow_clean 00:08:35.592 ************************************ 00:08:35.592 05:48:57 -- common/autotest_common.sh@1114 -- # lvs_grow 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.592 05:48:57 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:36.159 05:48:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:36.159 05:48:57 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:36.418 05:48:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:36.418 05:48:57 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:36.418 05:48:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:36.678 05:48:58 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:36.678 05:48:58 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:36.678 05:48:58 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a5f920b5-6115-4917-b987-e05c279a7c3e lvol 150 00:08:36.678 05:48:58 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c18de591-f9b8-473c-8999-64e5dc291cd9 00:08:36.678 05:48:58 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:36.937 05:48:58 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:36.937 [2024-12-15 05:48:58.566025] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:36.937 [2024-12-15 05:48:58.566129] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:36.937 true 00:08:37.196 05:48:58 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:37.196 05:48:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:37.196 05:48:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:37.196 05:48:58 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:37.456 05:48:59 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c18de591-f9b8-473c-8999-64e5dc291cd9 00:08:37.714 05:48:59 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:37.972 [2024-12-15 05:48:59.474590] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.972 05:48:59 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.231 05:48:59 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:38.231 05:48:59 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72489 00:08:38.231 05:48:59 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.231 05:48:59 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72489 /var/tmp/bdevperf.sock 00:08:38.231 05:48:59 -- common/autotest_common.sh@829 -- # '[' -z 72489 ']' 00:08:38.231 
05:48:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.231 05:48:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:38.231 05:48:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.231 05:48:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.231 05:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.231 [2024-12-15 05:48:59.742640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:38.231 [2024-12-15 05:48:59.742746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72489 ] 00:08:38.490 [2024-12-15 05:48:59.877804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.490 [2024-12-15 05:48:59.915234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.099 05:49:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.099 05:49:00 -- common/autotest_common.sh@862 -- # return 0 00:08:39.099 05:49:00 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:39.358 Nvme0n1 00:08:39.358 05:49:00 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:39.617 [ 00:08:39.617 { 00:08:39.617 "name": "Nvme0n1", 00:08:39.617 "aliases": [ 00:08:39.617 "c18de591-f9b8-473c-8999-64e5dc291cd9" 00:08:39.617 ], 00:08:39.617 "product_name": "NVMe disk", 00:08:39.617 "block_size": 4096, 00:08:39.617 "num_blocks": 38912, 00:08:39.617 "uuid": "c18de591-f9b8-473c-8999-64e5dc291cd9", 00:08:39.617 "assigned_rate_limits": { 00:08:39.617 "rw_ios_per_sec": 0, 00:08:39.617 "rw_mbytes_per_sec": 0, 00:08:39.617 "r_mbytes_per_sec": 0, 00:08:39.617 "w_mbytes_per_sec": 0 00:08:39.617 }, 00:08:39.617 "claimed": false, 00:08:39.617 "zoned": false, 00:08:39.617 "supported_io_types": { 00:08:39.617 "read": true, 00:08:39.617 "write": true, 00:08:39.617 "unmap": true, 00:08:39.617 "write_zeroes": true, 00:08:39.617 "flush": true, 00:08:39.617 "reset": true, 00:08:39.617 "compare": true, 00:08:39.617 "compare_and_write": true, 00:08:39.617 "abort": true, 00:08:39.617 "nvme_admin": true, 00:08:39.617 "nvme_io": true 00:08:39.617 }, 00:08:39.617 "driver_specific": { 00:08:39.617 "nvme": [ 00:08:39.617 { 00:08:39.617 "trid": { 00:08:39.617 "trtype": "TCP", 00:08:39.617 "adrfam": "IPv4", 00:08:39.617 "traddr": "10.0.0.2", 00:08:39.617 "trsvcid": "4420", 00:08:39.618 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:39.618 }, 00:08:39.618 "ctrlr_data": { 00:08:39.618 "cntlid": 1, 00:08:39.618 "vendor_id": "0x8086", 00:08:39.618 "model_number": "SPDK bdev Controller", 00:08:39.618 "serial_number": "SPDK0", 00:08:39.618 "firmware_revision": "24.01.1", 00:08:39.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.618 "oacs": { 00:08:39.618 "security": 0, 00:08:39.618 "format": 0, 00:08:39.618 "firmware": 0, 00:08:39.618 "ns_manage": 0 00:08:39.618 }, 00:08:39.618 "multi_ctrlr": true, 00:08:39.618 "ana_reporting": false 00:08:39.618 }, 00:08:39.618 "vs": { 00:08:39.618 
"nvme_version": "1.3" 00:08:39.618 }, 00:08:39.618 "ns_data": { 00:08:39.618 "id": 1, 00:08:39.618 "can_share": true 00:08:39.618 } 00:08:39.618 } 00:08:39.618 ], 00:08:39.618 "mp_policy": "active_passive" 00:08:39.618 } 00:08:39.618 } 00:08:39.618 ] 00:08:39.618 05:49:01 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72507 00:08:39.618 05:49:01 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:39.618 05:49:01 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:39.877 Running I/O for 10 seconds... 00:08:40.813 Latency(us) 00:08:40.813 [2024-12-15T05:49:02.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.813 [2024-12-15T05:49:02.454Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.813 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:40.813 [2024-12-15T05:49:02.454Z] =================================================================================================================== 00:08:40.813 [2024-12-15T05:49:02.454Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:40.813 00:08:41.750 05:49:03 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:41.750 [2024-12-15T05:49:03.391Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.750 Nvme0n1 : 2.00 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:08:41.750 [2024-12-15T05:49:03.391Z] =================================================================================================================== 00:08:41.750 [2024-12-15T05:49:03.391Z] Total : 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:08:41.750 00:08:42.009 true 00:08:42.009 05:49:03 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:42.009 05:49:03 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:42.267 05:49:03 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:42.267 05:49:03 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:42.267 05:49:03 -- target/nvmf_lvs_grow.sh@65 -- # wait 72507 00:08:42.835 [2024-12-15T05:49:04.476Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.835 Nvme0n1 : 3.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:08:42.835 [2024-12-15T05:49:04.476Z] =================================================================================================================== 00:08:42.835 [2024-12-15T05:49:04.476Z] Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:08:42.835 00:08:43.771 [2024-12-15T05:49:05.412Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.771 Nvme0n1 : 4.00 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:08:43.771 [2024-12-15T05:49:05.412Z] =================================================================================================================== 00:08:43.771 [2024-12-15T05:49:05.412Z] Total : 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:08:43.771 00:08:44.707 [2024-12-15T05:49:06.348Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.707 Nvme0n1 : 5.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:44.707 [2024-12-15T05:49:06.348Z] =================================================================================================================== 00:08:44.707 [2024-12-15T05:49:06.348Z] Total : 6477.00 25.30 
0.00 0.00 0.00 0.00 0.00 00:08:44.707 00:08:46.086 [2024-12-15T05:49:07.727Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.086 Nvme0n1 : 6.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:46.086 [2024-12-15T05:49:07.727Z] =================================================================================================================== 00:08:46.086 [2024-12-15T05:49:07.727Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:46.086 00:08:47.022 [2024-12-15T05:49:08.663Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.022 Nvme0n1 : 7.00 6422.57 25.09 0.00 0.00 0.00 0.00 0.00 00:08:47.022 [2024-12-15T05:49:08.663Z] =================================================================================================================== 00:08:47.022 [2024-12-15T05:49:08.663Z] Total : 6422.57 25.09 0.00 0.00 0.00 0.00 0.00 00:08:47.022 00:08:47.958 [2024-12-15T05:49:09.599Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.958 Nvme0n1 : 8.00 6397.62 24.99 0.00 0.00 0.00 0.00 0.00 00:08:47.958 [2024-12-15T05:49:09.599Z] =================================================================================================================== 00:08:47.958 [2024-12-15T05:49:09.599Z] Total : 6397.62 24.99 0.00 0.00 0.00 0.00 0.00 00:08:47.958 00:08:48.894 [2024-12-15T05:49:10.535Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.894 Nvme0n1 : 9.00 6364.11 24.86 0.00 0.00 0.00 0.00 0.00 00:08:48.894 [2024-12-15T05:49:10.535Z] =================================================================================================================== 00:08:48.894 [2024-12-15T05:49:10.535Z] Total : 6364.11 24.86 0.00 0.00 0.00 0.00 0.00 00:08:48.894 00:08:49.834 [2024-12-15T05:49:11.475Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.834 Nvme0n1 : 10.00 6324.60 24.71 0.00 0.00 0.00 0.00 0.00 00:08:49.834 [2024-12-15T05:49:11.475Z] =================================================================================================================== 00:08:49.834 [2024-12-15T05:49:11.475Z] Total : 6324.60 24.71 0.00 0.00 0.00 0.00 0.00 00:08:49.834 00:08:49.834 00:08:49.834 Latency(us) 00:08:49.834 [2024-12-15T05:49:11.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.834 [2024-12-15T05:49:11.475Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.834 Nvme0n1 : 10.00 6334.38 24.74 0.00 0.00 20201.97 17635.14 47662.55 00:08:49.834 [2024-12-15T05:49:11.475Z] =================================================================================================================== 00:08:49.834 [2024-12-15T05:49:11.475Z] Total : 6334.38 24.74 0.00 0.00 20201.97 17635.14 47662.55 00:08:49.834 0 00:08:49.834 05:49:11 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72489 00:08:49.834 05:49:11 -- common/autotest_common.sh@936 -- # '[' -z 72489 ']' 00:08:49.834 05:49:11 -- common/autotest_common.sh@940 -- # kill -0 72489 00:08:49.834 05:49:11 -- common/autotest_common.sh@941 -- # uname 00:08:49.834 05:49:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:49.834 05:49:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72489 00:08:49.834 05:49:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:49.834 05:49:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:49.834 killing process with pid 72489 00:08:49.834 05:49:11 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 72489' 00:08:49.834 05:49:11 -- common/autotest_common.sh@955 -- # kill 72489 00:08:49.834 Received shutdown signal, test time was about 10.000000 seconds 00:08:49.834 00:08:49.834 Latency(us) 00:08:49.834 [2024-12-15T05:49:11.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.834 [2024-12-15T05:49:11.475Z] =================================================================================================================== 00:08:49.834 [2024-12-15T05:49:11.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:49.834 05:49:11 -- common/autotest_common.sh@960 -- # wait 72489 00:08:50.098 05:49:11 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.357 05:49:11 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:50.357 05:49:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:50.615 05:49:12 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:50.615 05:49:12 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:08:50.615 05:49:12 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.874 [2024-12-15 05:49:12.424676] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:50.874 05:49:12 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:50.874 05:49:12 -- common/autotest_common.sh@650 -- # local es=0 00:08:50.874 05:49:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:50.874 05:49:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.874 05:49:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.874 05:49:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.874 05:49:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.874 05:49:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.874 05:49:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.874 05:49:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.874 05:49:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:50.874 05:49:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:51.133 request: 00:08:51.133 { 00:08:51.133 "uuid": "a5f920b5-6115-4917-b987-e05c279a7c3e", 00:08:51.133 "method": "bdev_lvol_get_lvstores", 00:08:51.133 "req_id": 1 00:08:51.133 } 00:08:51.133 Got JSON-RPC error response 00:08:51.133 response: 00:08:51.133 { 00:08:51.133 "code": -19, 00:08:51.133 "message": "No such device" 00:08:51.133 } 00:08:51.133 05:49:12 -- common/autotest_common.sh@653 -- # es=1 00:08:51.133 05:49:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.133 05:49:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.133 05:49:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:08:51.133 05:49:12 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.392 aio_bdev 00:08:51.392 05:49:13 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c18de591-f9b8-473c-8999-64e5dc291cd9 00:08:51.392 05:49:13 -- common/autotest_common.sh@897 -- # local bdev_name=c18de591-f9b8-473c-8999-64e5dc291cd9 00:08:51.392 05:49:13 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:51.392 05:49:13 -- common/autotest_common.sh@899 -- # local i 00:08:51.392 05:49:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:51.392 05:49:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:51.392 05:49:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:51.650 05:49:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c18de591-f9b8-473c-8999-64e5dc291cd9 -t 2000 00:08:51.909 [ 00:08:51.909 { 00:08:51.909 "name": "c18de591-f9b8-473c-8999-64e5dc291cd9", 00:08:51.909 "aliases": [ 00:08:51.909 "lvs/lvol" 00:08:51.909 ], 00:08:51.909 "product_name": "Logical Volume", 00:08:51.909 "block_size": 4096, 00:08:51.909 "num_blocks": 38912, 00:08:51.909 "uuid": "c18de591-f9b8-473c-8999-64e5dc291cd9", 00:08:51.909 "assigned_rate_limits": { 00:08:51.909 "rw_ios_per_sec": 0, 00:08:51.909 "rw_mbytes_per_sec": 0, 00:08:51.909 "r_mbytes_per_sec": 0, 00:08:51.909 "w_mbytes_per_sec": 0 00:08:51.909 }, 00:08:51.909 "claimed": false, 00:08:51.909 "zoned": false, 00:08:51.909 "supported_io_types": { 00:08:51.909 "read": true, 00:08:51.909 "write": true, 00:08:51.909 "unmap": true, 00:08:51.909 "write_zeroes": true, 00:08:51.909 "flush": false, 00:08:51.909 "reset": true, 00:08:51.909 "compare": false, 00:08:51.909 "compare_and_write": false, 00:08:51.909 "abort": false, 00:08:51.909 "nvme_admin": false, 00:08:51.909 "nvme_io": false 00:08:51.909 }, 00:08:51.909 "driver_specific": { 00:08:51.909 "lvol": { 00:08:51.909 "lvol_store_uuid": "a5f920b5-6115-4917-b987-e05c279a7c3e", 00:08:51.909 "base_bdev": "aio_bdev", 00:08:51.909 "thin_provision": false, 00:08:51.909 "snapshot": false, 00:08:51.909 "clone": false, 00:08:51.909 "esnap_clone": false 00:08:51.909 } 00:08:51.909 } 00:08:51.909 } 00:08:51.909 ] 00:08:51.909 05:49:13 -- common/autotest_common.sh@905 -- # return 0 00:08:51.909 05:49:13 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:51.909 05:49:13 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:52.476 05:49:13 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:52.476 05:49:13 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:52.476 05:49:13 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:52.476 05:49:14 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:52.476 05:49:14 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c18de591-f9b8-473c-8999-64e5dc291cd9 00:08:52.734 05:49:14 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5f920b5-6115-4917-b987-e05c279a7c3e 00:08:52.993 05:49:14 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
00:08:53.561 05:49:14 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.820 00:08:53.820 real 0m18.127s 00:08:53.820 user 0m17.200s 00:08:53.820 sys 0m2.369s 00:08:53.820 ************************************ 00:08:53.820 END TEST lvs_grow_clean 00:08:53.820 ************************************ 00:08:53.820 05:49:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.820 05:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:53.820 05:49:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:53.820 05:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.820 05:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:53.820 ************************************ 00:08:53.820 START TEST lvs_grow_dirty 00:08:53.820 ************************************ 00:08:53.820 05:49:15 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.820 05:49:15 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.078 05:49:15 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:54.078 05:49:15 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:54.645 05:49:15 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:08:54.645 05:49:15 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:08:54.645 05:49:15 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:54.645 05:49:16 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:54.645 05:49:16 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:54.645 05:49:16 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b lvol 150 00:08:54.904 05:49:16 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e36e967-f8c4-4f23-a655-c6828fcf1645 00:08:54.904 05:49:16 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:54.904 05:49:16 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:55.163 [2024-12-15 05:49:16.747835] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:55.163 [2024-12-15 05:49:16.748230] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:55.163 true 00:08:55.163 05:49:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:55.163 05:49:16 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:08:55.730 05:49:17 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:55.730 05:49:17 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.989 05:49:17 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e36e967-f8c4-4f23-a655-c6828fcf1645 00:08:56.248 05:49:17 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:56.248 05:49:17 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.817 05:49:18 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72758 00:08:56.817 05:49:18 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:56.817 05:49:18 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.817 05:49:18 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72758 /var/tmp/bdevperf.sock 00:08:56.817 05:49:18 -- common/autotest_common.sh@829 -- # '[' -z 72758 ']' 00:08:56.817 05:49:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.817 05:49:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.817 05:49:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.817 05:49:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.817 05:49:18 -- common/autotest_common.sh@10 -- # set +x 00:08:56.817 [2024-12-15 05:49:18.229984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:56.817 [2024-12-15 05:49:18.230398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:08:56.817 [2024-12-15 05:49:18.368997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.817 [2024-12-15 05:49:18.405375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.752 05:49:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.752 05:49:19 -- common/autotest_common.sh@862 -- # return 0 00:08:57.752 05:49:19 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:58.010 Nvme0n1 00:08:58.010 05:49:19 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:58.285 [ 00:08:58.285 { 00:08:58.285 "name": "Nvme0n1", 00:08:58.285 "aliases": [ 00:08:58.285 "3e36e967-f8c4-4f23-a655-c6828fcf1645" 00:08:58.285 ], 00:08:58.285 "product_name": "NVMe disk", 00:08:58.285 "block_size": 4096, 00:08:58.285 "num_blocks": 38912, 00:08:58.285 "uuid": "3e36e967-f8c4-4f23-a655-c6828fcf1645", 00:08:58.285 "assigned_rate_limits": { 00:08:58.285 "rw_ios_per_sec": 0, 00:08:58.285 "rw_mbytes_per_sec": 0, 00:08:58.285 "r_mbytes_per_sec": 0, 00:08:58.285 "w_mbytes_per_sec": 0 00:08:58.285 }, 00:08:58.285 "claimed": false, 00:08:58.285 "zoned": false, 00:08:58.285 "supported_io_types": { 00:08:58.285 "read": true, 00:08:58.285 "write": true, 00:08:58.285 "unmap": true, 00:08:58.285 "write_zeroes": true, 00:08:58.285 "flush": true, 00:08:58.285 "reset": true, 00:08:58.285 "compare": true, 00:08:58.285 "compare_and_write": true, 00:08:58.285 "abort": true, 00:08:58.285 "nvme_admin": true, 00:08:58.285 "nvme_io": true 00:08:58.285 }, 00:08:58.285 "driver_specific": { 00:08:58.285 "nvme": [ 00:08:58.285 { 00:08:58.285 "trid": { 00:08:58.285 "trtype": "TCP", 00:08:58.285 "adrfam": "IPv4", 00:08:58.285 "traddr": "10.0.0.2", 00:08:58.285 "trsvcid": "4420", 00:08:58.285 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:58.285 }, 00:08:58.285 "ctrlr_data": { 00:08:58.285 "cntlid": 1, 00:08:58.285 "vendor_id": "0x8086", 00:08:58.285 "model_number": "SPDK bdev Controller", 00:08:58.285 "serial_number": "SPDK0", 00:08:58.285 "firmware_revision": "24.01.1", 00:08:58.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:58.285 "oacs": { 00:08:58.285 "security": 0, 00:08:58.285 "format": 0, 00:08:58.285 "firmware": 0, 00:08:58.285 "ns_manage": 0 00:08:58.285 }, 00:08:58.285 "multi_ctrlr": true, 00:08:58.285 "ana_reporting": false 00:08:58.285 }, 00:08:58.285 "vs": { 00:08:58.285 "nvme_version": "1.3" 00:08:58.285 }, 00:08:58.285 "ns_data": { 00:08:58.285 "id": 1, 00:08:58.285 "can_share": true 00:08:58.285 } 00:08:58.285 } 00:08:58.285 ], 00:08:58.285 "mp_policy": "active_passive" 00:08:58.285 } 00:08:58.285 } 00:08:58.285 ] 00:08:58.285 05:49:19 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72781 00:08:58.285 05:49:19 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:58.285 05:49:19 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:58.553 Running I/O for 10 seconds... 
00:08:59.489 Latency(us) 00:08:59.489 [2024-12-15T05:49:21.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.489 [2024-12-15T05:49:21.130Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.489 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:59.489 [2024-12-15T05:49:21.130Z] =================================================================================================================== 00:08:59.489 [2024-12-15T05:49:21.130Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:59.489 00:09:00.426 05:49:21 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:00.426 [2024-12-15T05:49:22.067Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.426 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:00.426 [2024-12-15T05:49:22.067Z] =================================================================================================================== 00:09:00.426 [2024-12-15T05:49:22.067Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:00.426 00:09:00.684 true 00:09:00.684 05:49:22 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:00.684 05:49:22 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:00.942 05:49:22 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:00.942 05:49:22 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:00.942 05:49:22 -- target/nvmf_lvs_grow.sh@65 -- # wait 72781 00:09:01.509 [2024-12-15T05:49:23.150Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.509 Nvme0n1 : 3.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:01.509 [2024-12-15T05:49:23.150Z] =================================================================================================================== 00:09:01.509 [2024-12-15T05:49:23.150Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:01.509 00:09:02.446 [2024-12-15T05:49:24.087Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.446 Nvme0n1 : 4.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:02.446 [2024-12-15T05:49:24.087Z] =================================================================================================================== 00:09:02.446 [2024-12-15T05:49:24.087Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:02.446 00:09:03.382 [2024-12-15T05:49:25.023Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.382 Nvme0n1 : 5.00 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:09:03.382 [2024-12-15T05:49:25.023Z] =================================================================================================================== 00:09:03.382 [2024-12-15T05:49:25.023Z] Total : 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:09:03.382 00:09:04.318 [2024-12-15T05:49:25.959Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.318 Nvme0n1 : 6.00 6430.33 25.12 0.00 0.00 0.00 0.00 0.00 00:09:04.318 [2024-12-15T05:49:25.959Z] =================================================================================================================== 00:09:04.318 [2024-12-15T05:49:25.959Z] Total : 6430.33 25.12 0.00 0.00 0.00 0.00 0.00 00:09:04.318 00:09:05.695 [2024-12-15T05:49:27.336Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:05.695 Nvme0n1 : 7.00 6374.29 24.90 0.00 0.00 0.00 0.00 0.00 00:09:05.695 [2024-12-15T05:49:27.336Z] =================================================================================================================== 00:09:05.695 [2024-12-15T05:49:27.336Z] Total : 6374.29 24.90 0.00 0.00 0.00 0.00 0.00 00:09:05.695 00:09:06.632 [2024-12-15T05:49:28.273Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.632 Nvme0n1 : 8.00 6339.50 24.76 0.00 0.00 0.00 0.00 0.00 00:09:06.632 [2024-12-15T05:49:28.273Z] =================================================================================================================== 00:09:06.632 [2024-12-15T05:49:28.273Z] Total : 6339.50 24.76 0.00 0.00 0.00 0.00 0.00 00:09:06.632 00:09:07.568 [2024-12-15T05:49:29.209Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.568 Nvme0n1 : 9.00 6312.44 24.66 0.00 0.00 0.00 0.00 0.00 00:09:07.568 [2024-12-15T05:49:29.209Z] =================================================================================================================== 00:09:07.568 [2024-12-15T05:49:29.209Z] Total : 6312.44 24.66 0.00 0.00 0.00 0.00 0.00 00:09:07.568 00:09:08.507 [2024-12-15T05:49:30.148Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.507 Nvme0n1 : 10.00 6303.50 24.62 0.00 0.00 0.00 0.00 0.00 00:09:08.507 [2024-12-15T05:49:30.148Z] =================================================================================================================== 00:09:08.507 [2024-12-15T05:49:30.148Z] Total : 6303.50 24.62 0.00 0.00 0.00 0.00 0.00 00:09:08.507 00:09:08.507 00:09:08.507 Latency(us) 00:09:08.507 [2024-12-15T05:49:30.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.507 [2024-12-15T05:49:30.148Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.507 Nvme0n1 : 10.01 6307.84 24.64 0.00 0.00 20286.45 7387.69 98661.47 00:09:08.507 [2024-12-15T05:49:30.148Z] =================================================================================================================== 00:09:08.507 [2024-12-15T05:49:30.148Z] Total : 6307.84 24.64 0.00 0.00 20286.45 7387.69 98661.47 00:09:08.507 0 00:09:08.507 05:49:29 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72758 00:09:08.507 05:49:29 -- common/autotest_common.sh@936 -- # '[' -z 72758 ']' 00:09:08.507 05:49:29 -- common/autotest_common.sh@940 -- # kill -0 72758 00:09:08.507 05:49:29 -- common/autotest_common.sh@941 -- # uname 00:09:08.507 05:49:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:08.507 05:49:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72758 00:09:08.507 killing process with pid 72758 00:09:08.507 Received shutdown signal, test time was about 10.000000 seconds 00:09:08.507 00:09:08.507 Latency(us) 00:09:08.507 [2024-12-15T05:49:30.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.507 [2024-12-15T05:49:30.148Z] =================================================================================================================== 00:09:08.507 [2024-12-15T05:49:30.148Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:08.507 05:49:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:08.507 05:49:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:08.507 05:49:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72758' 00:09:08.507 05:49:30 -- common/autotest_common.sh@955 
-- # kill 72758 00:09:08.507 05:49:30 -- common/autotest_common.sh@960 -- # wait 72758 00:09:08.765 05:49:30 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:09.025 05:49:30 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:09.025 05:49:30 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:09.283 05:49:30 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:09.283 05:49:30 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:09.283 05:49:30 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72408 00:09:09.283 05:49:30 -- target/nvmf_lvs_grow.sh@74 -- # wait 72408 00:09:09.283 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72408 Killed "${NVMF_APP[@]}" "$@" 00:09:09.283 05:49:30 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:09.283 05:49:30 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:09.283 05:49:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:09.283 05:49:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.283 05:49:30 -- common/autotest_common.sh@10 -- # set +x 00:09:09.283 05:49:30 -- nvmf/common.sh@469 -- # nvmfpid=72913 00:09:09.283 05:49:30 -- nvmf/common.sh@470 -- # waitforlisten 72913 00:09:09.283 05:49:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:09.283 05:49:30 -- common/autotest_common.sh@829 -- # '[' -z 72913 ']' 00:09:09.283 05:49:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.283 05:49:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.283 05:49:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.283 05:49:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.283 05:49:30 -- common/autotest_common.sh@10 -- # set +x 00:09:09.283 [2024-12-15 05:49:30.793412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:09.283 [2024-12-15 05:49:30.793539] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.542 [2024-12-15 05:49:30.928126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.542 [2024-12-15 05:49:30.959238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:09.542 [2024-12-15 05:49:30.959413] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.542 [2024-12-15 05:49:30.959426] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.542 [2024-12-15 05:49:30.959434] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:09.542 [2024-12-15 05:49:30.959456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.478 05:49:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.478 05:49:31 -- common/autotest_common.sh@862 -- # return 0 00:09:10.478 05:49:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:10.478 05:49:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.478 05:49:31 -- common/autotest_common.sh@10 -- # set +x 00:09:10.478 05:49:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.478 05:49:31 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.478 [2024-12-15 05:49:32.052733] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:10.478 [2024-12-15 05:49:32.052988] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:10.478 [2024-12-15 05:49:32.053298] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:10.478 05:49:32 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:10.478 05:49:32 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 3e36e967-f8c4-4f23-a655-c6828fcf1645 00:09:10.478 05:49:32 -- common/autotest_common.sh@897 -- # local bdev_name=3e36e967-f8c4-4f23-a655-c6828fcf1645 00:09:10.478 05:49:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:10.478 05:49:32 -- common/autotest_common.sh@899 -- # local i 00:09:10.478 05:49:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:10.478 05:49:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:10.478 05:49:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:10.737 05:49:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e36e967-f8c4-4f23-a655-c6828fcf1645 -t 2000 00:09:10.996 [ 00:09:10.996 { 00:09:10.996 "name": "3e36e967-f8c4-4f23-a655-c6828fcf1645", 00:09:10.996 "aliases": [ 00:09:10.996 "lvs/lvol" 00:09:10.996 ], 00:09:10.996 "product_name": "Logical Volume", 00:09:10.996 "block_size": 4096, 00:09:10.996 "num_blocks": 38912, 00:09:10.996 "uuid": "3e36e967-f8c4-4f23-a655-c6828fcf1645", 00:09:10.996 "assigned_rate_limits": { 00:09:10.996 "rw_ios_per_sec": 0, 00:09:10.996 "rw_mbytes_per_sec": 0, 00:09:10.996 "r_mbytes_per_sec": 0, 00:09:10.996 "w_mbytes_per_sec": 0 00:09:10.996 }, 00:09:10.996 "claimed": false, 00:09:10.996 "zoned": false, 00:09:10.996 "supported_io_types": { 00:09:10.996 "read": true, 00:09:10.996 "write": true, 00:09:10.996 "unmap": true, 00:09:10.996 "write_zeroes": true, 00:09:10.996 "flush": false, 00:09:10.996 "reset": true, 00:09:10.996 "compare": false, 00:09:10.996 "compare_and_write": false, 00:09:10.996 "abort": false, 00:09:10.996 "nvme_admin": false, 00:09:10.996 "nvme_io": false 00:09:10.996 }, 00:09:10.996 "driver_specific": { 00:09:10.996 "lvol": { 00:09:10.996 "lvol_store_uuid": "1d75b4fd-ebb9-4546-91ce-663155d0e58b", 00:09:10.996 "base_bdev": "aio_bdev", 00:09:10.996 "thin_provision": false, 00:09:10.996 "snapshot": false, 00:09:10.996 "clone": false, 00:09:10.996 "esnap_clone": false 00:09:10.996 } 00:09:10.996 } 00:09:10.996 } 00:09:10.996 ] 00:09:11.254 05:49:32 -- common/autotest_common.sh@905 -- # return 0 00:09:11.254 05:49:32 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:11.254 05:49:32 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:11.254 05:49:32 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:11.254 05:49:32 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:11.254 05:49:32 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:11.513 05:49:33 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:11.513 05:49:33 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.772 [2024-12-15 05:49:33.322763] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:11.772 05:49:33 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:11.772 05:49:33 -- common/autotest_common.sh@650 -- # local es=0 00:09:11.772 05:49:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:11.772 05:49:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.772 05:49:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.772 05:49:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.772 05:49:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.772 05:49:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.772 05:49:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.772 05:49:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.772 05:49:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:11.772 05:49:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:12.032 request: 00:09:12.032 { 00:09:12.032 "uuid": "1d75b4fd-ebb9-4546-91ce-663155d0e58b", 00:09:12.032 "method": "bdev_lvol_get_lvstores", 00:09:12.032 "req_id": 1 00:09:12.032 } 00:09:12.032 Got JSON-RPC error response 00:09:12.032 response: 00:09:12.032 { 00:09:12.032 "code": -19, 00:09:12.032 "message": "No such device" 00:09:12.032 } 00:09:12.032 05:49:33 -- common/autotest_common.sh@653 -- # es=1 00:09:12.032 05:49:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:12.032 05:49:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:12.032 05:49:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:12.032 05:49:33 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.291 aio_bdev 00:09:12.291 05:49:33 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3e36e967-f8c4-4f23-a655-c6828fcf1645 00:09:12.291 05:49:33 -- common/autotest_common.sh@897 -- # local bdev_name=3e36e967-f8c4-4f23-a655-c6828fcf1645 00:09:12.291 05:49:33 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:12.291 05:49:33 -- common/autotest_common.sh@899 -- # local i 00:09:12.291 05:49:33 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:12.291 05:49:33 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:12.291 05:49:33 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:12.550 05:49:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e36e967-f8c4-4f23-a655-c6828fcf1645 -t 2000 00:09:12.808 [ 00:09:12.808 { 00:09:12.808 "name": "3e36e967-f8c4-4f23-a655-c6828fcf1645", 00:09:12.808 "aliases": [ 00:09:12.808 "lvs/lvol" 00:09:12.808 ], 00:09:12.808 "product_name": "Logical Volume", 00:09:12.808 "block_size": 4096, 00:09:12.808 "num_blocks": 38912, 00:09:12.808 "uuid": "3e36e967-f8c4-4f23-a655-c6828fcf1645", 00:09:12.808 "assigned_rate_limits": { 00:09:12.808 "rw_ios_per_sec": 0, 00:09:12.808 "rw_mbytes_per_sec": 0, 00:09:12.808 "r_mbytes_per_sec": 0, 00:09:12.808 "w_mbytes_per_sec": 0 00:09:12.808 }, 00:09:12.808 "claimed": false, 00:09:12.808 "zoned": false, 00:09:12.808 "supported_io_types": { 00:09:12.808 "read": true, 00:09:12.808 "write": true, 00:09:12.808 "unmap": true, 00:09:12.808 "write_zeroes": true, 00:09:12.808 "flush": false, 00:09:12.808 "reset": true, 00:09:12.808 "compare": false, 00:09:12.808 "compare_and_write": false, 00:09:12.808 "abort": false, 00:09:12.808 "nvme_admin": false, 00:09:12.808 "nvme_io": false 00:09:12.808 }, 00:09:12.808 "driver_specific": { 00:09:12.808 "lvol": { 00:09:12.808 "lvol_store_uuid": "1d75b4fd-ebb9-4546-91ce-663155d0e58b", 00:09:12.808 "base_bdev": "aio_bdev", 00:09:12.808 "thin_provision": false, 00:09:12.808 "snapshot": false, 00:09:12.808 "clone": false, 00:09:12.808 "esnap_clone": false 00:09:12.808 } 00:09:12.808 } 00:09:12.808 } 00:09:12.808 ] 00:09:12.808 05:49:34 -- common/autotest_common.sh@905 -- # return 0 00:09:12.808 05:49:34 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:12.808 05:49:34 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:13.067 05:49:34 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:13.067 05:49:34 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:13.067 05:49:34 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:13.326 05:49:34 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:13.326 05:49:34 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3e36e967-f8c4-4f23-a655-c6828fcf1645 00:09:13.585 05:49:35 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1d75b4fd-ebb9-4546-91ce-663155d0e58b 00:09:13.896 05:49:35 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.166 05:49:35 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.425 ************************************ 00:09:14.425 END TEST lvs_grow_dirty 00:09:14.425 ************************************ 00:09:14.425 00:09:14.425 real 0m20.586s 00:09:14.425 user 0m40.916s 00:09:14.425 sys 0m9.456s 00:09:14.425 05:49:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.425 05:49:35 -- common/autotest_common.sh@10 -- # set +x 00:09:14.425 05:49:36 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:14.425 05:49:36 -- common/autotest_common.sh@806 -- # type=--id 00:09:14.425 05:49:36 -- 
common/autotest_common.sh@807 -- # id=0 00:09:14.425 05:49:36 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:14.425 05:49:36 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:14.425 05:49:36 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:14.425 05:49:36 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:14.425 05:49:36 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:14.425 05:49:36 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:14.425 nvmf_trace.0 00:09:14.425 05:49:36 -- common/autotest_common.sh@821 -- # return 0 00:09:14.425 05:49:36 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:14.425 05:49:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:14.425 05:49:36 -- nvmf/common.sh@116 -- # sync 00:09:14.684 05:49:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:14.684 05:49:36 -- nvmf/common.sh@119 -- # set +e 00:09:14.684 05:49:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:14.684 05:49:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:14.684 rmmod nvme_tcp 00:09:14.684 rmmod nvme_fabrics 00:09:14.684 rmmod nvme_keyring 00:09:14.684 05:49:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:14.684 05:49:36 -- nvmf/common.sh@123 -- # set -e 00:09:14.684 05:49:36 -- nvmf/common.sh@124 -- # return 0 00:09:14.684 05:49:36 -- nvmf/common.sh@477 -- # '[' -n 72913 ']' 00:09:14.684 05:49:36 -- nvmf/common.sh@478 -- # killprocess 72913 00:09:14.684 05:49:36 -- common/autotest_common.sh@936 -- # '[' -z 72913 ']' 00:09:14.684 05:49:36 -- common/autotest_common.sh@940 -- # kill -0 72913 00:09:14.684 05:49:36 -- common/autotest_common.sh@941 -- # uname 00:09:14.684 05:49:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:14.684 05:49:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72913 00:09:14.684 killing process with pid 72913 00:09:14.684 05:49:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:14.684 05:49:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:14.684 05:49:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72913' 00:09:14.684 05:49:36 -- common/autotest_common.sh@955 -- # kill 72913 00:09:14.684 05:49:36 -- common/autotest_common.sh@960 -- # wait 72913 00:09:14.943 05:49:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:14.943 05:49:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:14.943 05:49:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:14.943 05:49:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.943 05:49:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:14.943 05:49:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.943 05:49:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.943 05:49:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.943 05:49:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:14.943 00:09:14.943 real 0m40.432s 00:09:14.943 user 1m4.338s 00:09:14.943 sys 0m12.386s 00:09:14.943 05:49:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.943 05:49:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.943 ************************************ 00:09:14.943 END TEST nvmf_lvs_grow 00:09:14.943 ************************************ 00:09:14.943 05:49:36 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:14.943 05:49:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:14.943 05:49:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.943 05:49:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.943 ************************************ 00:09:14.943 START TEST nvmf_bdev_io_wait 00:09:14.943 ************************************ 00:09:14.943 05:49:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:15.203 * Looking for test storage... 00:09:15.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:15.203 05:49:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:15.203 05:49:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:15.203 05:49:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:15.203 05:49:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:15.203 05:49:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:15.203 05:49:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:15.203 05:49:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:15.203 05:49:36 -- scripts/common.sh@335 -- # IFS=.-: 00:09:15.203 05:49:36 -- scripts/common.sh@335 -- # read -ra ver1 00:09:15.203 05:49:36 -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.203 05:49:36 -- scripts/common.sh@336 -- # read -ra ver2 00:09:15.203 05:49:36 -- scripts/common.sh@337 -- # local 'op=<' 00:09:15.203 05:49:36 -- scripts/common.sh@339 -- # ver1_l=2 00:09:15.203 05:49:36 -- scripts/common.sh@340 -- # ver2_l=1 00:09:15.203 05:49:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:15.203 05:49:36 -- scripts/common.sh@343 -- # case "$op" in 00:09:15.203 05:49:36 -- scripts/common.sh@344 -- # : 1 00:09:15.203 05:49:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:15.203 05:49:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.203 05:49:36 -- scripts/common.sh@364 -- # decimal 1 00:09:15.203 05:49:36 -- scripts/common.sh@352 -- # local d=1 00:09:15.203 05:49:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.203 05:49:36 -- scripts/common.sh@354 -- # echo 1 00:09:15.203 05:49:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:15.203 05:49:36 -- scripts/common.sh@365 -- # decimal 2 00:09:15.203 05:49:36 -- scripts/common.sh@352 -- # local d=2 00:09:15.203 05:49:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.203 05:49:36 -- scripts/common.sh@354 -- # echo 2 00:09:15.203 05:49:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:15.203 05:49:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:15.203 05:49:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:15.203 05:49:36 -- scripts/common.sh@367 -- # return 0 00:09:15.203 05:49:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.203 05:49:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:15.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.203 --rc genhtml_branch_coverage=1 00:09:15.203 --rc genhtml_function_coverage=1 00:09:15.203 --rc genhtml_legend=1 00:09:15.203 --rc geninfo_all_blocks=1 00:09:15.203 --rc geninfo_unexecuted_blocks=1 00:09:15.203 00:09:15.203 ' 00:09:15.203 05:49:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:15.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.203 --rc genhtml_branch_coverage=1 00:09:15.203 --rc genhtml_function_coverage=1 00:09:15.203 --rc genhtml_legend=1 00:09:15.203 --rc geninfo_all_blocks=1 00:09:15.203 --rc geninfo_unexecuted_blocks=1 00:09:15.203 00:09:15.203 ' 00:09:15.203 05:49:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:15.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.203 --rc genhtml_branch_coverage=1 00:09:15.203 --rc genhtml_function_coverage=1 00:09:15.203 --rc genhtml_legend=1 00:09:15.203 --rc geninfo_all_blocks=1 00:09:15.203 --rc geninfo_unexecuted_blocks=1 00:09:15.203 00:09:15.203 ' 00:09:15.203 05:49:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:15.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.203 --rc genhtml_branch_coverage=1 00:09:15.203 --rc genhtml_function_coverage=1 00:09:15.203 --rc genhtml_legend=1 00:09:15.203 --rc geninfo_all_blocks=1 00:09:15.203 --rc geninfo_unexecuted_blocks=1 00:09:15.203 00:09:15.203 ' 00:09:15.203 05:49:36 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:15.203 05:49:36 -- nvmf/common.sh@7 -- # uname -s 00:09:15.203 05:49:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.203 05:49:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.203 05:49:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.203 05:49:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.203 05:49:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.203 05:49:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.203 05:49:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.203 05:49:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.203 05:49:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.203 05:49:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.203 05:49:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 
00:09:15.203 05:49:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:09:15.203 05:49:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.203 05:49:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.203 05:49:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:15.203 05:49:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.203 05:49:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.203 05:49:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.203 05:49:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.203 05:49:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.203 05:49:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.203 05:49:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.203 05:49:36 -- paths/export.sh@5 -- # export PATH 00:09:15.204 05:49:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.204 05:49:36 -- nvmf/common.sh@46 -- # : 0 00:09:15.204 05:49:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:15.204 05:49:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:15.204 05:49:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:15.204 05:49:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.204 05:49:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.204 05:49:36 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:15.204 05:49:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:15.204 05:49:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:15.204 05:49:36 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.204 05:49:36 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.204 05:49:36 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:15.204 05:49:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:15.204 05:49:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.204 05:49:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:15.204 05:49:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:15.204 05:49:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:15.204 05:49:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.204 05:49:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.204 05:49:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.204 05:49:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:15.204 05:49:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:15.204 05:49:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:15.204 05:49:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:15.204 05:49:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:15.204 05:49:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:15.204 05:49:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.204 05:49:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.204 05:49:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:15.204 05:49:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:15.204 05:49:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:15.204 05:49:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:15.204 05:49:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:15.204 05:49:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.204 05:49:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:15.204 05:49:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:15.204 05:49:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:15.204 05:49:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:15.204 05:49:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:15.204 05:49:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:15.204 Cannot find device "nvmf_tgt_br" 00:09:15.204 05:49:36 -- nvmf/common.sh@154 -- # true 00:09:15.204 05:49:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.204 Cannot find device "nvmf_tgt_br2" 00:09:15.204 05:49:36 -- nvmf/common.sh@155 -- # true 00:09:15.204 05:49:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:15.204 05:49:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:15.204 Cannot find device "nvmf_tgt_br" 00:09:15.204 05:49:36 -- nvmf/common.sh@157 -- # true 00:09:15.204 05:49:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:15.204 Cannot find device "nvmf_tgt_br2" 00:09:15.204 05:49:36 -- nvmf/common.sh@158 -- # true 00:09:15.204 05:49:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:15.463 05:49:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:15.463 05:49:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.463 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.463 05:49:36 -- nvmf/common.sh@161 -- # true 00:09:15.463 05:49:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.463 05:49:36 -- nvmf/common.sh@162 -- # true 00:09:15.463 05:49:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:15.463 05:49:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:15.463 05:49:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:15.463 05:49:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:15.463 05:49:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:15.463 05:49:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:15.463 05:49:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:15.463 05:49:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:15.463 05:49:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:15.463 05:49:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:15.463 05:49:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:15.463 05:49:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:15.463 05:49:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:15.463 05:49:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:15.463 05:49:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:15.463 05:49:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:15.463 05:49:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:15.463 05:49:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:15.463 05:49:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:15.463 05:49:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:15.463 05:49:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:15.463 05:49:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:15.463 05:49:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:15.463 05:49:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:15.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:15.463 00:09:15.463 --- 10.0.0.2 ping statistics --- 00:09:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.463 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:15.463 05:49:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:15.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:15.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:15.463 00:09:15.463 --- 10.0.0.3 ping statistics --- 00:09:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.463 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:15.463 05:49:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:15.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:09:15.463 00:09:15.463 --- 10.0.0.1 ping statistics --- 00:09:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.463 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:09:15.463 05:49:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.463 05:49:37 -- nvmf/common.sh@421 -- # return 0 00:09:15.463 05:49:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:15.463 05:49:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.463 05:49:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:15.463 05:49:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:15.463 05:49:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.463 05:49:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:15.463 05:49:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:15.463 05:49:37 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:15.463 05:49:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:15.463 05:49:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.463 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.463 05:49:37 -- nvmf/common.sh@469 -- # nvmfpid=73232 00:09:15.463 05:49:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:15.463 05:49:37 -- nvmf/common.sh@470 -- # waitforlisten 73232 00:09:15.463 05:49:37 -- common/autotest_common.sh@829 -- # '[' -z 73232 ']' 00:09:15.463 05:49:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.463 05:49:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.463 05:49:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.463 05:49:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.463 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.722 [2024-12-15 05:49:37.145007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:15.722 [2024-12-15 05:49:37.145331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.722 [2024-12-15 05:49:37.286827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.722 [2024-12-15 05:49:37.323337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:15.722 [2024-12-15 05:49:37.323506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.722 [2024-12-15 05:49:37.323520] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.722 [2024-12-15 05:49:37.323529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
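
The nvmf_veth_init sequence traced above builds the whole test network from scratch: a dedicated network namespace for the target, veth pairs whose host-side ends are enslaved to a bridge, addresses on 10.0.0.0/24, iptables rules admitting NVMe/TCP traffic on port 4420, and ping checks in both directions. A minimal standalone sketch of the same topology, condensed from the commands shown in the trace (interface, namespace names and addresses as in the log; run as root), would be:

#!/usr/bin/env bash
# Sketch of the veth/bridge topology set up by nvmf_veth_init in the trace above.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: host-side ends stay in the default namespace, target ends move into the netns
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiator gets 10.0.0.1, the two target interfaces get 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring all links up and bridge the host-side ends together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP (port 4420) and bridge-internal forwarding, then verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
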
00:09:15.722 [2024-12-15 05:49:37.323648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.722 [2024-12-15 05:49:37.323787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.722 [2024-12-15 05:49:37.324057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.722 [2024-12-15 05:49:37.324063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.981 05:49:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.981 05:49:37 -- common/autotest_common.sh@862 -- # return 0 00:09:15.981 05:49:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:15.981 05:49:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.981 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 05:49:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:15.981 05:49:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.981 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 05:49:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:15.981 05:49:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.981 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 05:49:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.981 05:49:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.981 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 [2024-12-15 05:49:37.471831] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.981 05:49:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.981 05:49:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.981 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 Malloc0 00:09:15.981 05:49:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.981 05:49:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.981 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 05:49:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.981 05:49:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.981 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 05:49:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.981 05:49:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.981 05:49:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.981 [2024-12-15 05:49:37.526919] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.981 05:49:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73260 00:09:15.981 05:49:37 
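
The target in this test is launched inside the namespace with --wait-for-rpc and then configured entirely over its RPC socket: bdev options, framework init, TCP transport, a Malloc0 namespace and a TCP listener on 10.0.0.2:4420. A condensed sketch of that bring-up using scripts/rpc.py, with the same parameters that appear in the rpc_cmd trace (the assumption here is that each rpc_cmd name maps one-to-one onto the rpc.py method of the same spelling), might be:

# Start the target in the namespace, then drive it over /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

./scripts/rpc.py bdev_set_options -p 5 -c 1          # tiny bdev_io pool, the point of the bdev_io_wait test
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
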
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@30 -- # READ_PID=73262 00:09:15.981 05:49:37 -- nvmf/common.sh@520 -- # config=() 00:09:15.981 05:49:37 -- nvmf/common.sh@520 -- # local subsystem config 00:09:15.981 05:49:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:15.981 05:49:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:15.981 { 00:09:15.981 "params": { 00:09:15.981 "name": "Nvme$subsystem", 00:09:15.981 "trtype": "$TEST_TRANSPORT", 00:09:15.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.981 "adrfam": "ipv4", 00:09:15.981 "trsvcid": "$NVMF_PORT", 00:09:15.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.981 "hdgst": ${hdgst:-false}, 00:09:15.981 "ddgst": ${ddgst:-false} 00:09:15.981 }, 00:09:15.981 "method": "bdev_nvme_attach_controller" 00:09:15.981 } 00:09:15.981 EOF 00:09:15.981 )") 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73263 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:15.981 05:49:37 -- nvmf/common.sh@520 -- # config=() 00:09:15.981 05:49:37 -- nvmf/common.sh@542 -- # cat 00:09:15.981 05:49:37 -- nvmf/common.sh@520 -- # local subsystem config 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:15.981 05:49:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73267 00:09:15.981 05:49:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:15.981 { 00:09:15.981 "params": { 00:09:15.981 "name": "Nvme$subsystem", 00:09:15.981 "trtype": "$TEST_TRANSPORT", 00:09:15.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.981 "adrfam": "ipv4", 00:09:15.981 "trsvcid": "$NVMF_PORT", 00:09:15.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.981 "hdgst": ${hdgst:-false}, 00:09:15.981 "ddgst": ${ddgst:-false} 00:09:15.981 }, 00:09:15.981 "method": "bdev_nvme_attach_controller" 00:09:15.981 } 00:09:15.981 EOF 00:09:15.981 )") 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@35 -- # sync 00:09:15.981 05:49:37 -- nvmf/common.sh@542 -- # cat 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:15.981 05:49:37 -- nvmf/common.sh@520 -- # config=() 00:09:15.981 05:49:37 -- nvmf/common.sh@520 -- # local subsystem config 00:09:15.981 05:49:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:15.981 05:49:37 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:15.981 05:49:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:15.981 { 00:09:15.982 "params": { 00:09:15.982 "name": "Nvme$subsystem", 00:09:15.982 "trtype": "$TEST_TRANSPORT", 00:09:15.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.982 "adrfam": "ipv4", 00:09:15.982 "trsvcid": 
"$NVMF_PORT", 00:09:15.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.982 "hdgst": ${hdgst:-false}, 00:09:15.982 "ddgst": ${ddgst:-false} 00:09:15.982 }, 00:09:15.982 "method": "bdev_nvme_attach_controller" 00:09:15.982 } 00:09:15.982 EOF 00:09:15.982 )") 00:09:15.982 05:49:37 -- nvmf/common.sh@520 -- # config=() 00:09:15.982 05:49:37 -- nvmf/common.sh@520 -- # local subsystem config 00:09:15.982 05:49:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:15.982 05:49:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:15.982 { 00:09:15.982 "params": { 00:09:15.982 "name": "Nvme$subsystem", 00:09:15.982 "trtype": "$TEST_TRANSPORT", 00:09:15.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.982 "adrfam": "ipv4", 00:09:15.982 "trsvcid": "$NVMF_PORT", 00:09:15.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.982 "hdgst": ${hdgst:-false}, 00:09:15.982 "ddgst": ${ddgst:-false} 00:09:15.982 }, 00:09:15.982 "method": "bdev_nvme_attach_controller" 00:09:15.982 } 00:09:15.982 EOF 00:09:15.982 )") 00:09:15.982 05:49:37 -- nvmf/common.sh@542 -- # cat 00:09:15.982 05:49:37 -- nvmf/common.sh@544 -- # jq . 00:09:15.982 05:49:37 -- nvmf/common.sh@544 -- # jq . 00:09:15.982 05:49:37 -- nvmf/common.sh@542 -- # cat 00:09:15.982 05:49:37 -- nvmf/common.sh@545 -- # IFS=, 00:09:15.982 05:49:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:15.982 "params": { 00:09:15.982 "name": "Nvme1", 00:09:15.982 "trtype": "tcp", 00:09:15.982 "traddr": "10.0.0.2", 00:09:15.982 "adrfam": "ipv4", 00:09:15.982 "trsvcid": "4420", 00:09:15.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.982 "hdgst": false, 00:09:15.982 "ddgst": false 00:09:15.982 }, 00:09:15.982 "method": "bdev_nvme_attach_controller" 00:09:15.982 }' 00:09:15.982 05:49:37 -- nvmf/common.sh@544 -- # jq . 00:09:15.982 05:49:37 -- nvmf/common.sh@545 -- # IFS=, 00:09:15.982 05:49:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:15.982 "params": { 00:09:15.982 "name": "Nvme1", 00:09:15.982 "trtype": "tcp", 00:09:15.982 "traddr": "10.0.0.2", 00:09:15.982 "adrfam": "ipv4", 00:09:15.982 "trsvcid": "4420", 00:09:15.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.982 "hdgst": false, 00:09:15.982 "ddgst": false 00:09:15.982 }, 00:09:15.982 "method": "bdev_nvme_attach_controller" 00:09:15.982 }' 00:09:15.982 05:49:37 -- nvmf/common.sh@545 -- # IFS=, 00:09:15.982 05:49:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:15.982 "params": { 00:09:15.982 "name": "Nvme1", 00:09:15.982 "trtype": "tcp", 00:09:15.982 "traddr": "10.0.0.2", 00:09:15.982 "adrfam": "ipv4", 00:09:15.982 "trsvcid": "4420", 00:09:15.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.982 "hdgst": false, 00:09:15.982 "ddgst": false 00:09:15.982 }, 00:09:15.982 "method": "bdev_nvme_attach_controller" 00:09:15.982 }' 00:09:15.982 05:49:37 -- nvmf/common.sh@544 -- # jq . 
00:09:15.982 05:49:37 -- nvmf/common.sh@545 -- # IFS=, 00:09:15.982 05:49:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:15.982 "params": { 00:09:15.982 "name": "Nvme1", 00:09:15.982 "trtype": "tcp", 00:09:15.982 "traddr": "10.0.0.2", 00:09:15.982 "adrfam": "ipv4", 00:09:15.982 "trsvcid": "4420", 00:09:15.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.982 "hdgst": false, 00:09:15.982 "ddgst": false 00:09:15.982 }, 00:09:15.982 "method": "bdev_nvme_attach_controller" 00:09:15.982 }' 00:09:15.982 [2024-12-15 05:49:37.586823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:15.982 [2024-12-15 05:49:37.587089] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:15.982 [2024-12-15 05:49:37.589378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:15.982 [2024-12-15 05:49:37.589779] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:15.982 05:49:37 -- target/bdev_io_wait.sh@37 -- # wait 73260 00:09:15.982 [2024-12-15 05:49:37.604894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:15.982 [2024-12-15 05:49:37.604965] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:15.982 [2024-12-15 05:49:37.606503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:15.982 [2024-12-15 05:49:37.606737] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:16.241 [2024-12-15 05:49:37.771110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.241 [2024-12-15 05:49:37.796397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:16.241 [2024-12-15 05:49:37.812209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.241 [2024-12-15 05:49:37.836995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:16.241 [2024-12-15 05:49:37.863021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.499 [2024-12-15 05:49:37.888765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:16.499 [2024-12-15 05:49:37.913810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.499 Running I/O for 1 seconds... 00:09:16.499 [2024-12-15 05:49:37.937966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:16.499 Running I/O for 1 seconds... 00:09:16.499 Running I/O for 1 seconds... 00:09:16.499 Running I/O for 1 seconds... 
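
Each of the four bdevperf jobs launched above gets its own core mask, shared-memory id and workload (write, read, flush, unmap), and reads its configuration through process substitution; the trace prints only the bdev_nvme_attach_controller params block emitted by gen_nvmf_target_json. A sketch of one such invocation follows; the outer "subsystems"/"bdev" wrapper around that block is an assumption (the log shows only the params), and the temp-file path is hypothetical:

# Hypothetical standalone reproduction of the 0x10 "write" job from the trace.
cat > /tmp/nvmf_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# -m core mask, -i shm id, -q queue depth, -o IO size, -w workload, -t seconds, -s memory size (MB)
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvmf_bdevperf.json \
    -q 128 -o 4096 -w write -t 1 -s 256
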
00:09:17.436 00:09:17.436 Latency(us) 00:09:17.436 [2024-12-15T05:49:39.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.436 [2024-12-15T05:49:39.077Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:17.436 Nvme1n1 : 1.00 168370.13 657.70 0.00 0.00 757.37 335.13 1184.12 00:09:17.436 [2024-12-15T05:49:39.077Z] =================================================================================================================== 00:09:17.436 [2024-12-15T05:49:39.077Z] Total : 168370.13 657.70 0.00 0.00 757.37 335.13 1184.12 00:09:17.436 00:09:17.436 Latency(us) 00:09:17.436 [2024-12-15T05:49:39.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.436 [2024-12-15T05:49:39.077Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:17.436 Nvme1n1 : 1.01 10492.93 40.99 0.00 0.00 12145.77 7804.74 19303.33 00:09:17.436 [2024-12-15T05:49:39.077Z] =================================================================================================================== 00:09:17.436 [2024-12-15T05:49:39.077Z] Total : 10492.93 40.99 0.00 0.00 12145.77 7804.74 19303.33 00:09:17.436 00:09:17.436 Latency(us) 00:09:17.436 [2024-12-15T05:49:39.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.436 [2024-12-15T05:49:39.077Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:17.436 Nvme1n1 : 1.01 8480.45 33.13 0.00 0.00 15022.62 8400.52 27405.96 00:09:17.436 [2024-12-15T05:49:39.077Z] =================================================================================================================== 00:09:17.436 [2024-12-15T05:49:39.077Z] Total : 8480.45 33.13 0.00 0.00 15022.62 8400.52 27405.96 00:09:17.695 00:09:17.695 Latency(us) 00:09:17.695 [2024-12-15T05:49:39.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.695 [2024-12-15T05:49:39.336Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:17.695 Nvme1n1 : 1.01 8651.41 33.79 0.00 0.00 14732.60 7626.01 26691.03 00:09:17.695 [2024-12-15T05:49:39.336Z] =================================================================================================================== 00:09:17.695 [2024-12-15T05:49:39.336Z] Total : 8651.41 33.79 0.00 0.00 14732.60 7626.01 26691.03 00:09:17.695 05:49:39 -- target/bdev_io_wait.sh@38 -- # wait 73262 00:09:17.695 05:49:39 -- target/bdev_io_wait.sh@39 -- # wait 73263 00:09:17.695 05:49:39 -- target/bdev_io_wait.sh@40 -- # wait 73267 00:09:17.695 05:49:39 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.695 05:49:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.695 05:49:39 -- common/autotest_common.sh@10 -- # set +x 00:09:17.695 05:49:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.695 05:49:39 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:17.695 05:49:39 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:17.695 05:49:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:17.695 05:49:39 -- nvmf/common.sh@116 -- # sync 00:09:17.695 05:49:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:17.695 05:49:39 -- nvmf/common.sh@119 -- # set +e 00:09:17.695 05:49:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:17.695 05:49:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:17.695 rmmod nvme_tcp 00:09:17.695 rmmod nvme_fabrics 00:09:17.695 rmmod nvme_keyring 00:09:17.695 05:49:39 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:17.695 05:49:39 -- nvmf/common.sh@123 -- # set -e 00:09:17.695 05:49:39 -- nvmf/common.sh@124 -- # return 0 00:09:17.695 05:49:39 -- nvmf/common.sh@477 -- # '[' -n 73232 ']' 00:09:17.695 05:49:39 -- nvmf/common.sh@478 -- # killprocess 73232 00:09:17.695 05:49:39 -- common/autotest_common.sh@936 -- # '[' -z 73232 ']' 00:09:17.695 05:49:39 -- common/autotest_common.sh@940 -- # kill -0 73232 00:09:17.695 05:49:39 -- common/autotest_common.sh@941 -- # uname 00:09:17.954 05:49:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:17.954 05:49:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73232 00:09:17.954 killing process with pid 73232 00:09:17.954 05:49:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:17.954 05:49:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:17.954 05:49:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73232' 00:09:17.954 05:49:39 -- common/autotest_common.sh@955 -- # kill 73232 00:09:17.954 05:49:39 -- common/autotest_common.sh@960 -- # wait 73232 00:09:17.954 05:49:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:17.954 05:49:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:17.954 05:49:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:17.954 05:49:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:17.954 05:49:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:17.954 05:49:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.954 05:49:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.954 05:49:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.954 05:49:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:17.954 ************************************ 00:09:17.954 END TEST nvmf_bdev_io_wait 00:09:17.954 ************************************ 00:09:17.954 00:09:17.954 real 0m3.011s 00:09:17.954 user 0m12.732s 00:09:17.954 sys 0m1.965s 00:09:17.954 05:49:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.954 05:49:39 -- common/autotest_common.sh@10 -- # set +x 00:09:17.954 05:49:39 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:17.954 05:49:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:17.954 05:49:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.954 05:49:39 -- common/autotest_common.sh@10 -- # set +x 00:09:17.954 ************************************ 00:09:17.954 START TEST nvmf_queue_depth 00:09:17.954 ************************************ 00:09:17.954 05:49:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:18.213 * Looking for test storage... 
00:09:18.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:18.213 05:49:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:18.213 05:49:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:18.214 05:49:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:18.214 05:49:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:18.214 05:49:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:18.214 05:49:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:18.214 05:49:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:18.214 05:49:39 -- scripts/common.sh@335 -- # IFS=.-: 00:09:18.214 05:49:39 -- scripts/common.sh@335 -- # read -ra ver1 00:09:18.214 05:49:39 -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.214 05:49:39 -- scripts/common.sh@336 -- # read -ra ver2 00:09:18.214 05:49:39 -- scripts/common.sh@337 -- # local 'op=<' 00:09:18.214 05:49:39 -- scripts/common.sh@339 -- # ver1_l=2 00:09:18.214 05:49:39 -- scripts/common.sh@340 -- # ver2_l=1 00:09:18.214 05:49:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:18.214 05:49:39 -- scripts/common.sh@343 -- # case "$op" in 00:09:18.214 05:49:39 -- scripts/common.sh@344 -- # : 1 00:09:18.214 05:49:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:18.214 05:49:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.214 05:49:39 -- scripts/common.sh@364 -- # decimal 1 00:09:18.214 05:49:39 -- scripts/common.sh@352 -- # local d=1 00:09:18.214 05:49:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.214 05:49:39 -- scripts/common.sh@354 -- # echo 1 00:09:18.214 05:49:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:18.214 05:49:39 -- scripts/common.sh@365 -- # decimal 2 00:09:18.214 05:49:39 -- scripts/common.sh@352 -- # local d=2 00:09:18.214 05:49:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.214 05:49:39 -- scripts/common.sh@354 -- # echo 2 00:09:18.214 05:49:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:18.214 05:49:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:18.214 05:49:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:18.214 05:49:39 -- scripts/common.sh@367 -- # return 0 00:09:18.214 05:49:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.214 05:49:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.214 --rc genhtml_branch_coverage=1 00:09:18.214 --rc genhtml_function_coverage=1 00:09:18.214 --rc genhtml_legend=1 00:09:18.214 --rc geninfo_all_blocks=1 00:09:18.214 --rc geninfo_unexecuted_blocks=1 00:09:18.214 00:09:18.214 ' 00:09:18.214 05:49:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.214 --rc genhtml_branch_coverage=1 00:09:18.214 --rc genhtml_function_coverage=1 00:09:18.214 --rc genhtml_legend=1 00:09:18.214 --rc geninfo_all_blocks=1 00:09:18.214 --rc geninfo_unexecuted_blocks=1 00:09:18.214 00:09:18.214 ' 00:09:18.214 05:49:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.214 --rc genhtml_branch_coverage=1 00:09:18.214 --rc genhtml_function_coverage=1 00:09:18.214 --rc genhtml_legend=1 00:09:18.214 --rc geninfo_all_blocks=1 00:09:18.214 --rc geninfo_unexecuted_blocks=1 00:09:18.214 00:09:18.214 ' 00:09:18.214 
05:49:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:18.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.214 --rc genhtml_branch_coverage=1 00:09:18.214 --rc genhtml_function_coverage=1 00:09:18.214 --rc genhtml_legend=1 00:09:18.214 --rc geninfo_all_blocks=1 00:09:18.214 --rc geninfo_unexecuted_blocks=1 00:09:18.214 00:09:18.214 ' 00:09:18.214 05:49:39 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:18.214 05:49:39 -- nvmf/common.sh@7 -- # uname -s 00:09:18.214 05:49:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.214 05:49:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.214 05:49:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.214 05:49:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.214 05:49:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.214 05:49:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.214 05:49:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.214 05:49:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.214 05:49:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.214 05:49:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.214 05:49:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:09:18.214 05:49:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:09:18.214 05:49:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.214 05:49:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.214 05:49:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:18.214 05:49:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.214 05:49:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.214 05:49:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.214 05:49:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.214 05:49:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.214 05:49:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.214 05:49:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.214 05:49:39 -- paths/export.sh@5 -- # export PATH 00:09:18.214 05:49:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.214 05:49:39 -- nvmf/common.sh@46 -- # : 0 00:09:18.214 05:49:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:18.214 05:49:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:18.214 05:49:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:18.214 05:49:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.214 05:49:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.214 05:49:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:18.214 05:49:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:18.214 05:49:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:18.214 05:49:39 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:18.214 05:49:39 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:18.214 05:49:39 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:18.214 05:49:39 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:18.214 05:49:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:18.214 05:49:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.214 05:49:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:18.214 05:49:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:18.214 05:49:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:18.214 05:49:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.214 05:49:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.214 05:49:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.214 05:49:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:18.214 05:49:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:18.214 05:49:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:18.214 05:49:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:18.214 05:49:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:18.214 05:49:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:18.214 05:49:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.214 05:49:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.214 05:49:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:18.214 05:49:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:18.214 05:49:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:18.214 05:49:39 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:18.214 05:49:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:18.214 05:49:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.214 05:49:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:18.214 05:49:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:18.214 05:49:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:18.214 05:49:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:18.214 05:49:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:18.214 05:49:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:18.215 Cannot find device "nvmf_tgt_br" 00:09:18.215 05:49:39 -- nvmf/common.sh@154 -- # true 00:09:18.215 05:49:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:18.215 Cannot find device "nvmf_tgt_br2" 00:09:18.215 05:49:39 -- nvmf/common.sh@155 -- # true 00:09:18.215 05:49:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:18.215 05:49:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:18.215 Cannot find device "nvmf_tgt_br" 00:09:18.215 05:49:39 -- nvmf/common.sh@157 -- # true 00:09:18.215 05:49:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:18.215 Cannot find device "nvmf_tgt_br2" 00:09:18.215 05:49:39 -- nvmf/common.sh@158 -- # true 00:09:18.215 05:49:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:18.473 05:49:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:18.474 05:49:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:18.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.474 05:49:39 -- nvmf/common.sh@161 -- # true 00:09:18.474 05:49:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:18.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.474 05:49:39 -- nvmf/common.sh@162 -- # true 00:09:18.474 05:49:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:18.474 05:49:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:18.474 05:49:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:18.474 05:49:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:18.474 05:49:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:18.474 05:49:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:18.474 05:49:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:18.474 05:49:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:18.474 05:49:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:18.474 05:49:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:18.474 05:49:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:18.474 05:49:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:18.474 05:49:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:18.474 05:49:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:18.474 05:49:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:09:18.474 05:49:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:18.474 05:49:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:18.474 05:49:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:18.474 05:49:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:18.474 05:49:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:18.474 05:49:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:18.474 05:49:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:18.474 05:49:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:18.474 05:49:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:18.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:18.474 00:09:18.474 --- 10.0.0.2 ping statistics --- 00:09:18.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.474 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:18.474 05:49:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:18.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:18.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:18.474 00:09:18.474 --- 10.0.0.3 ping statistics --- 00:09:18.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.474 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:18.474 05:49:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:18.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:18.733 00:09:18.733 --- 10.0.0.1 ping statistics --- 00:09:18.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.733 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:18.733 05:49:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.733 05:49:40 -- nvmf/common.sh@421 -- # return 0 00:09:18.733 05:49:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:18.733 05:49:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.733 05:49:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:18.733 05:49:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:18.733 05:49:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.733 05:49:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:18.733 05:49:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:18.733 05:49:40 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:18.733 05:49:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:18.733 05:49:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.733 05:49:40 -- common/autotest_common.sh@10 -- # set +x 00:09:18.733 05:49:40 -- nvmf/common.sh@469 -- # nvmfpid=73477 00:09:18.733 05:49:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:18.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:18.733 05:49:40 -- nvmf/common.sh@470 -- # waitforlisten 73477 00:09:18.733 05:49:40 -- common/autotest_common.sh@829 -- # '[' -z 73477 ']' 00:09:18.733 05:49:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.733 05:49:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.733 05:49:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.733 05:49:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.733 05:49:40 -- common/autotest_common.sh@10 -- # set +x 00:09:18.733 [2024-12-15 05:49:40.180763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:18.734 [2024-12-15 05:49:40.180844] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.734 [2024-12-15 05:49:40.312977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.734 [2024-12-15 05:49:40.344073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:18.734 [2024-12-15 05:49:40.344462] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.734 [2024-12-15 05:49:40.344514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.734 [2024-12-15 05:49:40.344643] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.734 [2024-12-15 05:49:40.344702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.670 05:49:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.670 05:49:41 -- common/autotest_common.sh@862 -- # return 0 00:09:19.671 05:49:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:19.671 05:49:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.671 05:49:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.671 05:49:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.671 05:49:41 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.671 05:49:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.671 05:49:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.671 [2024-12-15 05:49:41.122931] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.671 05:49:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.671 05:49:41 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:19.671 05:49:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.671 05:49:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.671 Malloc0 00:09:19.671 05:49:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.671 05:49:41 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:19.671 05:49:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.671 05:49:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.671 05:49:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.671 05:49:41 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.671 05:49:41 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:19.671 05:49:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.671 05:49:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.671 05:49:41 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.671 05:49:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.671 05:49:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.671 [2024-12-15 05:49:41.176268] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.671 05:49:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.671 05:49:41 -- target/queue_depth.sh@30 -- # bdevperf_pid=73509 00:09:19.671 05:49:41 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:19.671 05:49:41 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.671 05:49:41 -- target/queue_depth.sh@33 -- # waitforlisten 73509 /var/tmp/bdevperf.sock 00:09:19.671 05:49:41 -- common/autotest_common.sh@829 -- # '[' -z 73509 ']' 00:09:19.671 05:49:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.671 05:49:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.671 05:49:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.671 05:49:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.671 05:49:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.671 [2024-12-15 05:49:41.227966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:19.671 [2024-12-15 05:49:41.228476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73509 ] 00:09:19.930 [2024-12-15 05:49:41.370211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.930 [2024-12-15 05:49:41.409974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.864 05:49:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.864 05:49:42 -- common/autotest_common.sh@862 -- # return 0 00:09:20.864 05:49:42 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:20.864 05:49:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.864 05:49:42 -- common/autotest_common.sh@10 -- # set +x 00:09:20.864 NVMe0n1 00:09:20.864 05:49:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.865 05:49:42 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:20.865 Running I/O for 10 seconds... 
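Stripped of the rpc_cmd/waitforlisten wrappers, the queue_depth case above is a short RPC sequence against the target followed by a bdevperf run at queue depth 1024. A minimal sketch with abbreviated paths (the trace issues the same calls through its helpers and the full /home/vagrant/spdk_repo paths):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # transport options exactly as in the trace
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf: -z waits for the perform_tests RPC, -q 1024 is the queue depth under test
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests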
00:09:30.840 00:09:30.840 Latency(us) 00:09:30.840 [2024-12-15T05:49:52.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.840 [2024-12-15T05:49:52.481Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:30.840 Verification LBA range: start 0x0 length 0x4000 00:09:30.840 NVMe0n1 : 10.06 15141.17 59.15 0.00 0.00 67380.41 13941.29 65774.31 00:09:30.840 [2024-12-15T05:49:52.481Z] =================================================================================================================== 00:09:30.840 [2024-12-15T05:49:52.481Z] Total : 15141.17 59.15 0.00 0.00 67380.41 13941.29 65774.31 00:09:30.840 0 00:09:30.840 05:49:52 -- target/queue_depth.sh@39 -- # killprocess 73509 00:09:30.840 05:49:52 -- common/autotest_common.sh@936 -- # '[' -z 73509 ']' 00:09:30.840 05:49:52 -- common/autotest_common.sh@940 -- # kill -0 73509 00:09:30.840 05:49:52 -- common/autotest_common.sh@941 -- # uname 00:09:30.840 05:49:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.840 05:49:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73509 00:09:31.099 killing process with pid 73509 00:09:31.099 Received shutdown signal, test time was about 10.000000 seconds 00:09:31.099 00:09:31.099 Latency(us) 00:09:31.099 [2024-12-15T05:49:52.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.099 [2024-12-15T05:49:52.740Z] =================================================================================================================== 00:09:31.099 [2024-12-15T05:49:52.740Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:31.099 05:49:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:31.099 05:49:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:31.099 05:49:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73509' 00:09:31.099 05:49:52 -- common/autotest_common.sh@955 -- # kill 73509 00:09:31.099 05:49:52 -- common/autotest_common.sh@960 -- # wait 73509 00:09:31.099 05:49:52 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:31.099 05:49:52 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:31.099 05:49:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:31.099 05:49:52 -- nvmf/common.sh@116 -- # sync 00:09:31.099 05:49:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:31.099 05:49:52 -- nvmf/common.sh@119 -- # set +e 00:09:31.099 05:49:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:31.099 05:49:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:31.099 rmmod nvme_tcp 00:09:31.099 rmmod nvme_fabrics 00:09:31.099 rmmod nvme_keyring 00:09:31.099 05:49:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:31.099 05:49:52 -- nvmf/common.sh@123 -- # set -e 00:09:31.099 05:49:52 -- nvmf/common.sh@124 -- # return 0 00:09:31.099 05:49:52 -- nvmf/common.sh@477 -- # '[' -n 73477 ']' 00:09:31.099 05:49:52 -- nvmf/common.sh@478 -- # killprocess 73477 00:09:31.099 05:49:52 -- common/autotest_common.sh@936 -- # '[' -z 73477 ']' 00:09:31.099 05:49:52 -- common/autotest_common.sh@940 -- # kill -0 73477 00:09:31.099 05:49:52 -- common/autotest_common.sh@941 -- # uname 00:09:31.099 05:49:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:31.099 05:49:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73477 00:09:31.359 killing process with pid 73477 00:09:31.359 05:49:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:31.359 05:49:52 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:31.359 05:49:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73477' 00:09:31.359 05:49:52 -- common/autotest_common.sh@955 -- # kill 73477 00:09:31.359 05:49:52 -- common/autotest_common.sh@960 -- # wait 73477 00:09:31.359 05:49:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:31.359 05:49:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:31.359 05:49:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:31.359 05:49:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.359 05:49:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:31.359 05:49:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.359 05:49:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.359 05:49:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.359 05:49:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:31.359 00:09:31.359 real 0m13.358s 00:09:31.359 user 0m23.346s 00:09:31.359 sys 0m1.867s 00:09:31.359 05:49:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.359 05:49:52 -- common/autotest_common.sh@10 -- # set +x 00:09:31.359 ************************************ 00:09:31.359 END TEST nvmf_queue_depth 00:09:31.359 ************************************ 00:09:31.359 05:49:52 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:31.359 05:49:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:31.359 05:49:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.359 05:49:52 -- common/autotest_common.sh@10 -- # set +x 00:09:31.359 ************************************ 00:09:31.359 START TEST nvmf_multipath 00:09:31.359 ************************************ 00:09:31.359 05:49:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:31.619 * Looking for test storage... 00:09:31.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.619 05:49:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:31.619 05:49:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:31.619 05:49:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:31.619 05:49:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:31.619 05:49:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:31.619 05:49:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:31.619 05:49:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:31.619 05:49:53 -- scripts/common.sh@335 -- # IFS=.-: 00:09:31.619 05:49:53 -- scripts/common.sh@335 -- # read -ra ver1 00:09:31.619 05:49:53 -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.619 05:49:53 -- scripts/common.sh@336 -- # read -ra ver2 00:09:31.619 05:49:53 -- scripts/common.sh@337 -- # local 'op=<' 00:09:31.619 05:49:53 -- scripts/common.sh@339 -- # ver1_l=2 00:09:31.619 05:49:53 -- scripts/common.sh@340 -- # ver2_l=1 00:09:31.619 05:49:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:31.619 05:49:53 -- scripts/common.sh@343 -- # case "$op" in 00:09:31.619 05:49:53 -- scripts/common.sh@344 -- # : 1 00:09:31.619 05:49:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:31.619 05:49:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.619 05:49:53 -- scripts/common.sh@364 -- # decimal 1 00:09:31.619 05:49:53 -- scripts/common.sh@352 -- # local d=1 00:09:31.619 05:49:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.619 05:49:53 -- scripts/common.sh@354 -- # echo 1 00:09:31.619 05:49:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:31.619 05:49:53 -- scripts/common.sh@365 -- # decimal 2 00:09:31.619 05:49:53 -- scripts/common.sh@352 -- # local d=2 00:09:31.619 05:49:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.619 05:49:53 -- scripts/common.sh@354 -- # echo 2 00:09:31.619 05:49:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:31.619 05:49:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:31.619 05:49:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:31.619 05:49:53 -- scripts/common.sh@367 -- # return 0 00:09:31.619 05:49:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.619 05:49:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:31.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.619 --rc genhtml_branch_coverage=1 00:09:31.619 --rc genhtml_function_coverage=1 00:09:31.619 --rc genhtml_legend=1 00:09:31.619 --rc geninfo_all_blocks=1 00:09:31.619 --rc geninfo_unexecuted_blocks=1 00:09:31.619 00:09:31.619 ' 00:09:31.619 05:49:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:31.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.619 --rc genhtml_branch_coverage=1 00:09:31.619 --rc genhtml_function_coverage=1 00:09:31.619 --rc genhtml_legend=1 00:09:31.619 --rc geninfo_all_blocks=1 00:09:31.619 --rc geninfo_unexecuted_blocks=1 00:09:31.619 00:09:31.619 ' 00:09:31.619 05:49:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:31.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.619 --rc genhtml_branch_coverage=1 00:09:31.619 --rc genhtml_function_coverage=1 00:09:31.619 --rc genhtml_legend=1 00:09:31.619 --rc geninfo_all_blocks=1 00:09:31.619 --rc geninfo_unexecuted_blocks=1 00:09:31.619 00:09:31.619 ' 00:09:31.619 05:49:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:31.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.619 --rc genhtml_branch_coverage=1 00:09:31.619 --rc genhtml_function_coverage=1 00:09:31.619 --rc genhtml_legend=1 00:09:31.619 --rc geninfo_all_blocks=1 00:09:31.619 --rc geninfo_unexecuted_blocks=1 00:09:31.619 00:09:31.619 ' 00:09:31.619 05:49:53 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.619 05:49:53 -- nvmf/common.sh@7 -- # uname -s 00:09:31.619 05:49:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.619 05:49:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.619 05:49:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.619 05:49:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.619 05:49:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.619 05:49:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.619 05:49:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.619 05:49:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.619 05:49:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.619 05:49:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.619 05:49:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:09:31.619 
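The NVME_HOSTNQN/NVME_HOSTID pair generated here is what the initiator later hands to nvme connect. A small sketch of the pattern; deriving the host ID from the UUID portion of the NQN is an assumption based on the two values in the trace sharing the same UUID:

HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random UUID>
HOSTID=${HOSTNQN##*:}              # assumption: the UUID part is reused as the host ID
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420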
05:49:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:09:31.619 05:49:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.619 05:49:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.619 05:49:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.619 05:49:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.619 05:49:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.619 05:49:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.619 05:49:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.619 05:49:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.619 05:49:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.619 05:49:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.619 05:49:53 -- paths/export.sh@5 -- # export PATH 00:09:31.619 05:49:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.619 05:49:53 -- nvmf/common.sh@46 -- # : 0 00:09:31.619 05:49:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:31.619 05:49:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:31.619 05:49:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:31.619 05:49:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.619 05:49:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.619 05:49:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:31.619 05:49:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:31.619 05:49:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:31.619 05:49:53 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.619 05:49:53 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.619 05:49:53 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:31.619 05:49:53 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.619 05:49:53 -- target/multipath.sh@43 -- # nvmftestinit 00:09:31.619 05:49:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:31.619 05:49:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.619 05:49:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:31.619 05:49:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:31.619 05:49:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:31.619 05:49:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.619 05:49:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.619 05:49:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.619 05:49:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:31.619 05:49:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:31.620 05:49:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:31.620 05:49:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:31.620 05:49:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:31.620 05:49:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:31.620 05:49:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.620 05:49:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.620 05:49:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:31.620 05:49:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:31.620 05:49:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.620 05:49:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.620 05:49:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.620 05:49:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.620 05:49:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.620 05:49:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.620 05:49:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.620 05:49:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.620 05:49:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:31.620 05:49:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:31.620 Cannot find device "nvmf_tgt_br" 00:09:31.620 05:49:53 -- nvmf/common.sh@154 -- # true 00:09:31.620 05:49:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.620 Cannot find device "nvmf_tgt_br2" 00:09:31.620 05:49:53 -- nvmf/common.sh@155 -- # true 00:09:31.620 05:49:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:31.620 05:49:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:31.620 Cannot find device "nvmf_tgt_br" 00:09:31.620 05:49:53 -- nvmf/common.sh@157 -- # true 00:09:31.620 05:49:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:31.620 Cannot find device "nvmf_tgt_br2" 00:09:31.620 05:49:53 -- nvmf/common.sh@158 -- # true 00:09:31.620 05:49:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:31.878 05:49:53 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:31.878 05:49:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.878 05:49:53 -- nvmf/common.sh@161 -- # true 00:09:31.878 05:49:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.878 05:49:53 -- nvmf/common.sh@162 -- # true 00:09:31.878 05:49:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.878 05:49:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.878 05:49:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.878 05:49:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.878 05:49:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.878 05:49:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.878 05:49:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.878 05:49:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:31.878 05:49:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:31.878 05:49:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:31.878 05:49:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:31.878 05:49:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:31.878 05:49:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:31.878 05:49:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.878 05:49:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.878 05:49:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.878 05:49:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:31.878 05:49:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:31.878 05:49:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.878 05:49:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.878 05:49:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.878 05:49:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.878 05:49:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.878 05:49:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:31.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:31.878 00:09:31.878 --- 10.0.0.2 ping statistics --- 00:09:31.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.878 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:31.878 05:49:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:31.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:31.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:31.878 00:09:31.878 --- 10.0.0.3 ping statistics --- 00:09:31.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.878 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:31.878 05:49:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:09:31.878 00:09:31.878 --- 10.0.0.1 ping statistics --- 00:09:31.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.878 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:09:31.878 05:49:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.878 05:49:53 -- nvmf/common.sh@421 -- # return 0 00:09:31.878 05:49:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:31.878 05:49:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.878 05:49:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:31.878 05:49:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:31.878 05:49:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.878 05:49:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:31.878 05:49:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:31.878 05:49:53 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:31.878 05:49:53 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:31.878 05:49:53 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:31.878 05:49:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:31.878 05:49:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:31.878 05:49:53 -- common/autotest_common.sh@10 -- # set +x 00:09:32.137 05:49:53 -- nvmf/common.sh@469 -- # nvmfpid=73832 00:09:32.137 05:49:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.137 05:49:53 -- nvmf/common.sh@470 -- # waitforlisten 73832 00:09:32.137 05:49:53 -- common/autotest_common.sh@829 -- # '[' -z 73832 ']' 00:09:32.137 05:49:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.137 05:49:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.137 05:49:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.137 05:49:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.137 05:49:53 -- common/autotest_common.sh@10 -- # set +x 00:09:32.137 [2024-12-15 05:49:53.567686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:32.137 [2024-12-15 05:49:53.567801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.137 [2024-12-15 05:49:53.706773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.137 [2024-12-15 05:49:53.740325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:32.137 [2024-12-15 05:49:53.740515] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:32.137 [2024-12-15 05:49:53.740528] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.137 [2024-12-15 05:49:53.740536] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.137 [2024-12-15 05:49:53.740685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.137 [2024-12-15 05:49:53.741341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.137 [2024-12-15 05:49:53.741535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.137 [2024-12-15 05:49:53.741609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.396 05:49:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.396 05:49:53 -- common/autotest_common.sh@862 -- # return 0 00:09:32.396 05:49:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:32.396 05:49:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:32.396 05:49:53 -- common/autotest_common.sh@10 -- # set +x 00:09:32.396 05:49:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.396 05:49:53 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.655 [2024-12-15 05:49:54.124009] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.655 05:49:54 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:32.913 Malloc0 00:09:32.913 05:49:54 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:33.171 05:49:54 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.429 05:49:54 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.687 [2024-12-15 05:49:55.192836] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.687 05:49:55 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:33.945 [2024-12-15 05:49:55.425072] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:33.945 05:49:55 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:34.203 05:49:55 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:34.203 05:49:55 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.203 05:49:55 -- common/autotest_common.sh@1187 -- # local i=0 00:09:34.203 05:49:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.203 05:49:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:34.203 05:49:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:36.106 05:49:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
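The nvme0c0n1/nvme0c1n1 path pair inspected next exists because the same subsystem is exposed on both target addresses and the initiator connects to each of them. A condensed sketch of what the trace just did (rpc.py path abbreviated; $NVME_HOSTNQN and $NVME_HOSTID are the values generated earlier in the run):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r turns on ANA reporting
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# one connect per path (-g/-G enable TCP header/data digests); the kernel groups both
# controllers under a single nvme-subsystem, which yields the nvme0c0n1/nvme0c1n1 paths
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

The resulting paths are then located under /sys/class/nvme-subsystem/, which is what the get_subsystem helper in the next stretch of the trace scans for.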
00:09:36.106 05:49:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:36.106 05:49:57 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.106 05:49:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:36.106 05:49:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.106 05:49:57 -- common/autotest_common.sh@1197 -- # return 0 00:09:36.365 05:49:57 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:36.365 05:49:57 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:36.365 05:49:57 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:36.365 05:49:57 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:36.365 05:49:57 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:36.365 05:49:57 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:36.365 05:49:57 -- target/multipath.sh@38 -- # return 0 00:09:36.365 05:49:57 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:36.365 05:49:57 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:36.365 05:49:57 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:36.365 05:49:57 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:36.365 05:49:57 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:36.365 05:49:57 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:36.365 05:49:57 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:36.365 05:49:57 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:36.365 05:49:57 -- target/multipath.sh@22 -- # local timeout=20 00:09:36.365 05:49:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:36.365 05:49:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:36.365 05:49:57 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:36.365 05:49:57 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:36.365 05:49:57 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:36.365 05:49:57 -- target/multipath.sh@22 -- # local timeout=20 00:09:36.365 05:49:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:36.365 05:49:57 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:36.365 05:49:57 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:36.365 05:49:57 -- target/multipath.sh@85 -- # echo numa 00:09:36.365 05:49:57 -- target/multipath.sh@88 -- # fio_pid=73914 00:09:36.365 05:49:57 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:36.365 05:49:57 -- target/multipath.sh@90 -- # sleep 1 00:09:36.365 [global] 00:09:36.365 thread=1 00:09:36.365 invalidate=1 00:09:36.365 rw=randrw 00:09:36.365 time_based=1 00:09:36.365 runtime=6 00:09:36.365 ioengine=libaio 00:09:36.365 direct=1 00:09:36.365 bs=4096 00:09:36.365 iodepth=128 00:09:36.365 norandommap=0 00:09:36.365 numjobs=1 00:09:36.365 00:09:36.366 verify_dump=1 00:09:36.366 verify_backlog=512 00:09:36.366 verify_state_save=0 00:09:36.366 do_verify=1 00:09:36.366 verify=crc32c-intel 00:09:36.366 [job0] 00:09:36.366 filename=/dev/nvme0n1 00:09:36.366 Could not set queue depth (nvme0n1) 00:09:36.366 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.366 fio-3.35 00:09:36.366 Starting 1 thread 00:09:37.302 05:49:58 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:37.560 05:49:59 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:37.819 05:49:59 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:37.819 05:49:59 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:37.819 05:49:59 -- target/multipath.sh@22 -- # local timeout=20 00:09:37.819 05:49:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:37.819 05:49:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:37.819 05:49:59 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:37.819 05:49:59 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:37.819 05:49:59 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:37.819 05:49:59 -- target/multipath.sh@22 -- # local timeout=20 00:09:37.819 05:49:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:37.819 05:49:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:37.819 05:49:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:37.819 05:49:59 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:38.078 05:49:59 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:38.336 05:49:59 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:38.336 05:49:59 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:38.336 05:49:59 -- target/multipath.sh@22 -- # local timeout=20 00:09:38.336 05:49:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:38.336 05:49:59 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:38.336 05:49:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:38.336 05:49:59 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:38.336 05:49:59 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:38.336 05:49:59 -- target/multipath.sh@22 -- # local timeout=20 00:09:38.337 05:49:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:38.337 05:49:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:38.337 05:49:59 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:38.337 05:49:59 -- target/multipath.sh@104 -- # wait 73914 00:09:42.520 00:09:42.520 job0: (groupid=0, jobs=1): err= 0: pid=73941: Sun Dec 15 05:50:04 2024 00:09:42.520 read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(254MiB/6006msec) 00:09:42.520 slat (usec): min=6, max=5407, avg=53.18, stdev=218.03 00:09:42.520 clat (usec): min=1240, max=13944, avg=7911.87, stdev=1373.12 00:09:42.520 lat (usec): min=1259, max=13954, avg=7965.05, stdev=1378.28 00:09:42.520 clat percentiles (usec): 00:09:42.520 | 1.00th=[ 4228], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7177], 00:09:42.520 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:09:42.520 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[11076], 00:09:42.520 | 99.00th=[12256], 99.50th=[12518], 99.90th=[12911], 99.95th=[13042], 00:09:42.520 | 99.99th=[13698] 00:09:42.520 bw ( KiB/s): min= 8040, max=29712, per=53.36%, avg=23147.09, stdev=6866.72, samples=11 00:09:42.520 iops : min= 2010, max= 7428, avg=5786.73, stdev=1716.65, samples=11 00:09:42.520 write: IOPS=6516, BW=25.5MiB/s (26.7MB/s)(138MiB/5413msec); 0 zone resets 00:09:42.520 slat (usec): min=15, max=3887, avg=62.44, stdev=150.28 00:09:42.520 clat (usec): min=2366, max=13221, avg=6989.43, stdev=1182.00 00:09:42.520 lat (usec): min=2402, max=13244, avg=7051.87, stdev=1186.30 00:09:42.520 clat percentiles (usec): 00:09:42.520 | 1.00th=[ 3261], 5.00th=[ 4359], 10.00th=[ 5604], 20.00th=[ 6521], 00:09:42.520 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7308], 00:09:42.520 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8225], 00:09:42.520 | 99.00th=[10683], 99.50th=[11338], 99.90th=[12387], 99.95th=[12780], 00:09:42.520 | 99.99th=[13173] 00:09:42.520 bw ( KiB/s): min= 8112, max=29264, per=88.86%, avg=23162.91, stdev=6702.08, samples=11 00:09:42.520 iops : min= 2028, max= 7316, avg=5790.73, stdev=1675.52, samples=11 00:09:42.520 lat (msec) : 2=0.04%, 4=1.74%, 10=93.22%, 20=5.00% 00:09:42.520 cpu : usr=6.19%, sys=22.51%, ctx=5734, majf=0, minf=102 00:09:42.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:42.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.520 issued rwts: total=65130,35272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.520 00:09:42.520 Run status group 0 (all jobs): 00:09:42.520 READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=254MiB (267MB), run=6006-6006msec 00:09:42.520 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=138MiB (144MB), run=5413-5413msec 00:09:42.520 00:09:42.520 Disk stats (read/write): 00:09:42.520 nvme0n1: ios=64371/34426, merge=0/0, 
ticks=487917/225331, in_queue=713248, util=98.66% 00:09:42.520 05:50:04 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:42.778 05:50:04 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:43.036 05:50:04 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:43.036 05:50:04 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:43.036 05:50:04 -- target/multipath.sh@22 -- # local timeout=20 00:09:43.036 05:50:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:43.036 05:50:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:43.036 05:50:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:43.036 05:50:04 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:43.036 05:50:04 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:43.036 05:50:04 -- target/multipath.sh@22 -- # local timeout=20 00:09:43.036 05:50:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:43.036 05:50:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:43.036 05:50:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:43.036 05:50:04 -- target/multipath.sh@113 -- # echo round-robin 00:09:43.036 05:50:04 -- target/multipath.sh@116 -- # fio_pid=74016 00:09:43.036 05:50:04 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:43.036 05:50:04 -- target/multipath.sh@118 -- # sleep 1 00:09:43.294 [global] 00:09:43.294 thread=1 00:09:43.294 invalidate=1 00:09:43.294 rw=randrw 00:09:43.294 time_based=1 00:09:43.294 runtime=6 00:09:43.294 ioengine=libaio 00:09:43.294 direct=1 00:09:43.294 bs=4096 00:09:43.294 iodepth=128 00:09:43.294 norandommap=0 00:09:43.294 numjobs=1 00:09:43.294 00:09:43.294 verify_dump=1 00:09:43.294 verify_backlog=512 00:09:43.294 verify_state_save=0 00:09:43.294 do_verify=1 00:09:43.294 verify=crc32c-intel 00:09:43.294 [job0] 00:09:43.294 filename=/dev/nvme0n1 00:09:43.294 Could not set queue depth (nvme0n1) 00:09:43.294 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.294 fio-3.35 00:09:43.294 Starting 1 thread 00:09:44.230 05:50:05 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:44.535 05:50:05 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:44.797 05:50:06 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:44.797 05:50:06 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:44.797 05:50:06 -- target/multipath.sh@22 -- # local timeout=20 00:09:44.797 05:50:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:44.797 05:50:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:44.797 05:50:06 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:44.797 05:50:06 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:44.797 05:50:06 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:44.797 05:50:06 -- target/multipath.sh@22 -- # local timeout=20 00:09:44.797 05:50:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:44.797 05:50:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:44.797 05:50:06 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:44.797 05:50:06 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:45.056 05:50:06 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:45.314 05:50:06 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:45.314 05:50:06 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:45.314 05:50:06 -- target/multipath.sh@22 -- # local timeout=20 00:09:45.314 05:50:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:45.314 05:50:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:45.314 05:50:06 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:45.314 05:50:06 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:45.314 05:50:06 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:45.314 05:50:06 -- target/multipath.sh@22 -- # local timeout=20 00:09:45.314 05:50:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:45.314 05:50:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:45.314 05:50:06 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:45.314 05:50:06 -- target/multipath.sh@132 -- # wait 74016 00:09:49.499 00:09:49.499 job0: (groupid=0, jobs=1): err= 0: pid=74042: Sun Dec 15 05:50:10 2024 00:09:49.499 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(282MiB/5997msec) 00:09:49.499 slat (usec): min=2, max=5555, avg=40.37, stdev=185.87 00:09:49.499 clat (usec): min=412, max=15239, avg=7208.90, stdev=1748.34 00:09:49.499 lat (usec): min=424, max=15256, avg=7249.27, stdev=1761.45 00:09:49.499 clat percentiles (usec): 00:09:49.499 | 1.00th=[ 3261], 5.00th=[ 4178], 10.00th=[ 4883], 20.00th=[ 5800], 00:09:49.499 | 30.00th=[ 6587], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7635], 00:09:49.499 | 70.00th=[ 7963], 80.00th=[ 8291], 90.00th=[ 8848], 95.00th=[10552], 00:09:49.499 | 99.00th=[12125], 99.50th=[12649], 99.90th=[13304], 99.95th=[13435], 00:09:49.499 | 99.99th=[15270] 00:09:49.499 bw ( KiB/s): min= 4448, max=43144, per=53.46%, avg=25721.18, stdev=10648.47, samples=11 00:09:49.499 iops : min= 1112, max=10786, avg=6430.27, stdev=2662.08, samples=11 00:09:49.499 write: IOPS=7307, BW=28.5MiB/s (29.9MB/s)(152MiB/5324msec); 0 zone resets 00:09:49.499 slat (usec): min=4, max=3918, avg=53.78, stdev=131.37 00:09:49.499 clat (usec): min=334, max=13311, avg=6195.79, stdev=1641.60 00:09:49.499 lat (usec): min=367, max=13335, avg=6249.56, stdev=1654.96 00:09:49.499 clat percentiles (usec): 00:09:49.499 | 1.00th=[ 2671], 5.00th=[ 3294], 10.00th=[ 3720], 20.00th=[ 4424], 00:09:49.499 | 30.00th=[ 5276], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 6980], 00:09:49.499 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8160], 00:09:49.499 | 99.00th=[10028], 99.50th=[10814], 99.90th=[12125], 99.95th=[12518], 00:09:49.499 | 99.99th=[13173] 00:09:49.499 bw ( KiB/s): min= 4880, max=42536, per=87.96%, avg=25710.18, stdev=10506.55, samples=11 00:09:49.499 iops : min= 1220, max=10634, avg=6427.55, stdev=2626.64, samples=11 00:09:49.499 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.03% 00:09:49.499 lat (msec) : 2=0.10%, 4=7.19%, 10=88.50%, 20=4.15% 00:09:49.499 cpu : usr=6.38%, sys=24.35%, ctx=6083, majf=0, minf=163 00:09:49.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:49.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.499 issued rwts: total=72127,38903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.499 00:09:49.499 Run status group 0 (all jobs): 00:09:49.499 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=282MiB (295MB), run=5997-5997msec 00:09:49.499 WRITE: bw=28.5MiB/s (29.9MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=152MiB (159MB), run=5324-5324msec 00:09:49.499 00:09:49.499 Disk stats (read/write): 00:09:49.499 nvme0n1: ios=71248/38195, merge=0/0, ticks=483768/219130, in_queue=702898, util=98.58% 00:09:49.499 05:50:10 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:49.499 05:50:11 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.499 05:50:11 -- common/autotest_common.sh@1208 -- # local i=0 00:09:49.499 05:50:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.499 
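The failover behaviour exercised while fio was running is driven entirely by nvmf_subsystem_listener_set_ana_state on the target and confirmed by reading the per-path ana_state files on the initiator, as the trace shows. A minimal sketch of one flip (the test's check_ana_state helper wraps the reads in a retry loop with timeout=20):

# target side: demote one path, keep the other usable
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

# initiator side: one ana_state file per path controller
cat /sys/block/nvme0c0n1/ana_state    # expected: inaccessible
cat /sys/block/nvme0c1n1/ana_state    # expected: non-optimized

# the roles are later swapped the other way, and finally both listeners go back to optimized
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n optimized
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -n optimized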
05:50:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:49.499 05:50:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:49.499 05:50:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.499 05:50:11 -- common/autotest_common.sh@1220 -- # return 0 00:09:49.499 05:50:11 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.065 05:50:11 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:50.065 05:50:11 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:50.065 05:50:11 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:50.065 05:50:11 -- target/multipath.sh@144 -- # nvmftestfini 00:09:50.065 05:50:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:50.065 05:50:11 -- nvmf/common.sh@116 -- # sync 00:09:50.065 05:50:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:50.065 05:50:11 -- nvmf/common.sh@119 -- # set +e 00:09:50.065 05:50:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:50.065 05:50:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:50.065 rmmod nvme_tcp 00:09:50.065 rmmod nvme_fabrics 00:09:50.065 rmmod nvme_keyring 00:09:50.065 05:50:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:50.065 05:50:11 -- nvmf/common.sh@123 -- # set -e 00:09:50.065 05:50:11 -- nvmf/common.sh@124 -- # return 0 00:09:50.065 05:50:11 -- nvmf/common.sh@477 -- # '[' -n 73832 ']' 00:09:50.065 05:50:11 -- nvmf/common.sh@478 -- # killprocess 73832 00:09:50.065 05:50:11 -- common/autotest_common.sh@936 -- # '[' -z 73832 ']' 00:09:50.065 05:50:11 -- common/autotest_common.sh@940 -- # kill -0 73832 00:09:50.065 05:50:11 -- common/autotest_common.sh@941 -- # uname 00:09:50.065 05:50:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:50.065 05:50:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73832 00:09:50.065 05:50:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:50.065 05:50:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:50.065 killing process with pid 73832 00:09:50.065 05:50:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73832' 00:09:50.065 05:50:11 -- common/autotest_common.sh@955 -- # kill 73832 00:09:50.065 05:50:11 -- common/autotest_common.sh@960 -- # wait 73832 00:09:50.324 05:50:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:50.324 05:50:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:50.324 05:50:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:50.324 05:50:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.324 05:50:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:50.324 05:50:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.325 05:50:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.325 05:50:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.325 05:50:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:50.325 00:09:50.325 real 0m18.752s 00:09:50.325 user 1m10.155s 00:09:50.325 sys 0m10.031s 00:09:50.325 05:50:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:50.325 05:50:11 -- common/autotest_common.sh@10 -- # set +x 00:09:50.325 ************************************ 00:09:50.325 END TEST nvmf_multipath 00:09:50.325 ************************************ 00:09:50.325 05:50:11 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:50.325 05:50:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:50.325 05:50:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:50.325 05:50:11 -- common/autotest_common.sh@10 -- # set +x 00:09:50.325 ************************************ 00:09:50.325 START TEST nvmf_zcopy 00:09:50.325 ************************************ 00:09:50.325 05:50:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:50.325 * Looking for test storage... 00:09:50.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.325 05:50:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:50.325 05:50:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:50.325 05:50:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:50.325 05:50:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:50.325 05:50:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:50.325 05:50:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:50.325 05:50:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:50.325 05:50:11 -- scripts/common.sh@335 -- # IFS=.-: 00:09:50.325 05:50:11 -- scripts/common.sh@335 -- # read -ra ver1 00:09:50.325 05:50:11 -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.325 05:50:11 -- scripts/common.sh@336 -- # read -ra ver2 00:09:50.325 05:50:11 -- scripts/common.sh@337 -- # local 'op=<' 00:09:50.325 05:50:11 -- scripts/common.sh@339 -- # ver1_l=2 00:09:50.325 05:50:11 -- scripts/common.sh@340 -- # ver2_l=1 00:09:50.325 05:50:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:50.325 05:50:11 -- scripts/common.sh@343 -- # case "$op" in 00:09:50.325 05:50:11 -- scripts/common.sh@344 -- # : 1 00:09:50.325 05:50:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:50.325 05:50:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.584 05:50:11 -- scripts/common.sh@364 -- # decimal 1 00:09:50.584 05:50:11 -- scripts/common.sh@352 -- # local d=1 00:09:50.584 05:50:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.584 05:50:11 -- scripts/common.sh@354 -- # echo 1 00:09:50.584 05:50:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:50.584 05:50:11 -- scripts/common.sh@365 -- # decimal 2 00:09:50.584 05:50:11 -- scripts/common.sh@352 -- # local d=2 00:09:50.584 05:50:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.584 05:50:11 -- scripts/common.sh@354 -- # echo 2 00:09:50.584 05:50:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:50.584 05:50:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:50.584 05:50:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:50.584 05:50:11 -- scripts/common.sh@367 -- # return 0 00:09:50.584 05:50:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.584 05:50:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:50.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.584 --rc genhtml_branch_coverage=1 00:09:50.584 --rc genhtml_function_coverage=1 00:09:50.584 --rc genhtml_legend=1 00:09:50.584 --rc geninfo_all_blocks=1 00:09:50.584 --rc geninfo_unexecuted_blocks=1 00:09:50.584 00:09:50.584 ' 00:09:50.584 05:50:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:50.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.584 --rc genhtml_branch_coverage=1 00:09:50.584 --rc genhtml_function_coverage=1 00:09:50.584 --rc genhtml_legend=1 00:09:50.584 --rc geninfo_all_blocks=1 00:09:50.584 --rc geninfo_unexecuted_blocks=1 00:09:50.584 00:09:50.584 ' 00:09:50.584 05:50:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:50.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.584 --rc genhtml_branch_coverage=1 00:09:50.584 --rc genhtml_function_coverage=1 00:09:50.584 --rc genhtml_legend=1 00:09:50.584 --rc geninfo_all_blocks=1 00:09:50.584 --rc geninfo_unexecuted_blocks=1 00:09:50.584 00:09:50.584 ' 00:09:50.584 05:50:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:50.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.584 --rc genhtml_branch_coverage=1 00:09:50.584 --rc genhtml_function_coverage=1 00:09:50.584 --rc genhtml_legend=1 00:09:50.584 --rc geninfo_all_blocks=1 00:09:50.584 --rc geninfo_unexecuted_blocks=1 00:09:50.584 00:09:50.584 ' 00:09:50.584 05:50:11 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.584 05:50:11 -- nvmf/common.sh@7 -- # uname -s 00:09:50.584 05:50:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.584 05:50:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.584 05:50:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.584 05:50:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.584 05:50:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.584 05:50:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.584 05:50:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.584 05:50:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.584 05:50:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.584 05:50:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.584 05:50:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:09:50.584 
05:50:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:09:50.584 05:50:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.584 05:50:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.584 05:50:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.584 05:50:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.584 05:50:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.584 05:50:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.584 05:50:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.584 05:50:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.584 05:50:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.584 05:50:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.584 05:50:11 -- paths/export.sh@5 -- # export PATH 00:09:50.584 05:50:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.584 05:50:11 -- nvmf/common.sh@46 -- # : 0 00:09:50.584 05:50:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:50.584 05:50:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:50.584 05:50:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:50.584 05:50:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.584 05:50:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.584 05:50:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:50.584 05:50:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:50.585 05:50:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:50.585 05:50:11 -- target/zcopy.sh@12 -- # nvmftestinit 00:09:50.585 05:50:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:50.585 05:50:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.585 05:50:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:50.585 05:50:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:50.585 05:50:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:50.585 05:50:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.585 05:50:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.585 05:50:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.585 05:50:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:50.585 05:50:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:50.585 05:50:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:50.585 05:50:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:50.585 05:50:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:50.585 05:50:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:50.585 05:50:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.585 05:50:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.585 05:50:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:50.585 05:50:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:50.585 05:50:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.585 05:50:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.585 05:50:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.585 05:50:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.585 05:50:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.585 05:50:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.585 05:50:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.585 05:50:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.585 05:50:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:50.585 05:50:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:50.585 Cannot find device "nvmf_tgt_br" 00:09:50.585 05:50:12 -- nvmf/common.sh@154 -- # true 00:09:50.585 05:50:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.585 Cannot find device "nvmf_tgt_br2" 00:09:50.585 05:50:12 -- nvmf/common.sh@155 -- # true 00:09:50.585 05:50:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:50.585 05:50:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:50.585 Cannot find device "nvmf_tgt_br" 00:09:50.585 05:50:12 -- nvmf/common.sh@157 -- # true 00:09:50.585 05:50:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:50.585 Cannot find device "nvmf_tgt_br2" 00:09:50.585 05:50:12 -- nvmf/common.sh@158 -- # true 00:09:50.585 05:50:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:50.585 05:50:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:50.585 05:50:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.585 05:50:12 -- nvmf/common.sh@161 -- # true 00:09:50.585 05:50:12 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:50.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.585 05:50:12 -- nvmf/common.sh@162 -- # true 00:09:50.585 05:50:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:50.585 05:50:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:50.585 05:50:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:50.585 05:50:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:50.585 05:50:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:50.585 05:50:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:50.843 05:50:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:50.843 05:50:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:50.843 05:50:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:50.843 05:50:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:50.843 05:50:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:50.843 05:50:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:50.843 05:50:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:50.843 05:50:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:50.843 05:50:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:50.843 05:50:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:50.843 05:50:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:50.843 05:50:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:50.843 05:50:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:50.843 05:50:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:50.843 05:50:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:50.843 05:50:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:50.843 05:50:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:50.843 05:50:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:50.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:50.843 00:09:50.843 --- 10.0.0.2 ping statistics --- 00:09:50.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.843 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:50.843 05:50:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:50.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:50.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:50.844 00:09:50.844 --- 10.0.0.3 ping statistics --- 00:09:50.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.844 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:50.844 05:50:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:50.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:09:50.844 00:09:50.844 --- 10.0.0.1 ping statistics --- 00:09:50.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.844 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:50.844 05:50:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.844 05:50:12 -- nvmf/common.sh@421 -- # return 0 00:09:50.844 05:50:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:50.844 05:50:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.844 05:50:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:50.844 05:50:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:50.844 05:50:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.844 05:50:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:50.844 05:50:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:50.844 05:50:12 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:50.844 05:50:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:50.844 05:50:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.844 05:50:12 -- common/autotest_common.sh@10 -- # set +x 00:09:50.844 05:50:12 -- nvmf/common.sh@469 -- # nvmfpid=74305 00:09:50.844 05:50:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.844 05:50:12 -- nvmf/common.sh@470 -- # waitforlisten 74305 00:09:50.844 05:50:12 -- common/autotest_common.sh@829 -- # '[' -z 74305 ']' 00:09:50.844 05:50:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.844 05:50:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.844 05:50:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.844 05:50:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.844 05:50:12 -- common/autotest_common.sh@10 -- # set +x 00:09:50.844 [2024-12-15 05:50:12.438783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:50.844 [2024-12-15 05:50:12.438928] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.102 [2024-12-15 05:50:12.578019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.102 [2024-12-15 05:50:12.611804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:51.102 [2024-12-15 05:50:12.611984] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.102 [2024-12-15 05:50:12.611998] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.102 [2024-12-15 05:50:12.612006] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
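For reference, the veth/namespace topology that nvmf_veth_init assembles in the trace above can be reproduced stand-alone with roughly the commands below. This is a condensed, hand-written sketch distilled from the traced steps, not additional captured output; the interface names, the nvmf_tgt_ns_spdk namespace, the 10.0.0.0/24 addresses and TCP port 4420 are all taken from the log, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and omitted for brevity.

    # Target lives in its own network namespace; veth pairs connect it to the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator address on the host, target address inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # Bring the links up and join the peer ends with a bridge.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP (port 4420) in, allow bridged forwarding, and verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2

With the namespace in place, the target itself is launched inside it, as the nvmfappstart step above shows (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2).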
00:09:51.102 [2024-12-15 05:50:12.612033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.037 05:50:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.037 05:50:13 -- common/autotest_common.sh@862 -- # return 0 00:09:52.037 05:50:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:52.037 05:50:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:52.037 05:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.037 05:50:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.037 05:50:13 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:52.037 05:50:13 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:52.037 05:50:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.037 05:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.037 [2024-12-15 05:50:13.434479] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.037 05:50:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.037 05:50:13 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:52.037 05:50:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.037 05:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.037 05:50:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.037 05:50:13 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.037 05:50:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.037 05:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.037 [2024-12-15 05:50:13.454565] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.037 05:50:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.037 05:50:13 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.037 05:50:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.037 05:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.037 05:50:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.037 05:50:13 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:52.037 05:50:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.037 05:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.037 malloc0 00:09:52.037 05:50:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.037 05:50:13 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:52.037 05:50:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.037 05:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:52.037 05:50:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.037 05:50:13 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:52.037 05:50:13 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:52.037 05:50:13 -- nvmf/common.sh@520 -- # config=() 00:09:52.037 05:50:13 -- nvmf/common.sh@520 -- # local subsystem config 00:09:52.037 05:50:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:52.037 05:50:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:52.037 { 00:09:52.037 "params": { 00:09:52.037 "name": "Nvme$subsystem", 00:09:52.037 "trtype": "$TEST_TRANSPORT", 
00:09:52.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.037 "adrfam": "ipv4", 00:09:52.037 "trsvcid": "$NVMF_PORT", 00:09:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.037 "hdgst": ${hdgst:-false}, 00:09:52.037 "ddgst": ${ddgst:-false} 00:09:52.037 }, 00:09:52.037 "method": "bdev_nvme_attach_controller" 00:09:52.037 } 00:09:52.037 EOF 00:09:52.037 )") 00:09:52.037 05:50:13 -- nvmf/common.sh@542 -- # cat 00:09:52.037 05:50:13 -- nvmf/common.sh@544 -- # jq . 00:09:52.037 05:50:13 -- nvmf/common.sh@545 -- # IFS=, 00:09:52.037 05:50:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:52.037 "params": { 00:09:52.037 "name": "Nvme1", 00:09:52.037 "trtype": "tcp", 00:09:52.037 "traddr": "10.0.0.2", 00:09:52.037 "adrfam": "ipv4", 00:09:52.037 "trsvcid": "4420", 00:09:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.037 "hdgst": false, 00:09:52.037 "ddgst": false 00:09:52.037 }, 00:09:52.037 "method": "bdev_nvme_attach_controller" 00:09:52.037 }' 00:09:52.038 [2024-12-15 05:50:13.534561] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:52.038 [2024-12-15 05:50:13.534655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74338 ] 00:09:52.038 [2024-12-15 05:50:13.675332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.296 [2024-12-15 05:50:13.714626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.296 Running I/O for 10 seconds... 00:10:02.309 00:10:02.309 Latency(us) 00:10:02.309 [2024-12-15T05:50:23.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.309 [2024-12-15T05:50:23.950Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:02.309 Verification LBA range: start 0x0 length 0x1000 00:10:02.309 Nvme1n1 : 10.01 9741.42 76.10 0.00 0.00 13104.87 1221.35 23354.65 00:10:02.309 [2024-12-15T05:50:23.950Z] =================================================================================================================== 00:10:02.309 [2024-12-15T05:50:23.950Z] Total : 9741.42 76.10 0.00 0.00 13104.87 1221.35 23354.65 00:10:02.568 05:50:24 -- target/zcopy.sh@39 -- # perfpid=74450 00:10:02.568 05:50:24 -- target/zcopy.sh@41 -- # xtrace_disable 00:10:02.568 05:50:24 -- common/autotest_common.sh@10 -- # set +x 00:10:02.568 05:50:24 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:02.568 05:50:24 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:02.568 05:50:24 -- nvmf/common.sh@520 -- # config=() 00:10:02.568 05:50:24 -- nvmf/common.sh@520 -- # local subsystem config 00:10:02.568 05:50:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:02.568 05:50:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:02.568 { 00:10:02.568 "params": { 00:10:02.568 "name": "Nvme$subsystem", 00:10:02.568 "trtype": "$TEST_TRANSPORT", 00:10:02.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.568 "adrfam": "ipv4", 00:10:02.568 "trsvcid": "$NVMF_PORT", 00:10:02.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.568 "hdgst": ${hdgst:-false}, 00:10:02.568 "ddgst": ${ddgst:-false} 
00:10:02.568 }, 00:10:02.568 "method": "bdev_nvme_attach_controller" 00:10:02.568 } 00:10:02.568 EOF 00:10:02.568 )") 00:10:02.568 05:50:24 -- nvmf/common.sh@542 -- # cat 00:10:02.568 [2024-12-15 05:50:24.010590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.568 [2024-12-15 05:50:24.010652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.568 05:50:24 -- nvmf/common.sh@544 -- # jq . 00:10:02.568 05:50:24 -- nvmf/common.sh@545 -- # IFS=, 00:10:02.568 05:50:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:02.568 "params": { 00:10:02.568 "name": "Nvme1", 00:10:02.568 "trtype": "tcp", 00:10:02.569 "traddr": "10.0.0.2", 00:10:02.569 "adrfam": "ipv4", 00:10:02.569 "trsvcid": "4420", 00:10:02.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.569 "hdgst": false, 00:10:02.569 "ddgst": false 00:10:02.569 }, 00:10:02.569 "method": "bdev_nvme_attach_controller" 00:10:02.569 }' 00:10:02.569 [2024-12-15 05:50:24.018555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.018597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.030564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.030610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.042603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.042641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.054561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.054601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.060394] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:02.569 [2024-12-15 05:50:24.060496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74450 ] 00:10:02.569 [2024-12-15 05:50:24.066589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.066634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.078560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.078582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.090562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.090583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.102568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.102590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.114570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.114591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.126572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.126593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.138577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.138599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.150596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.150636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.162598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.162622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.174596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.174620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.186642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.186704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.198647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.198681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.569 [2024-12-15 05:50:24.199721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.569 [2024-12-15 05:50:24.206642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.569 [2024-12-15 05:50:24.206694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
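To make the earlier part of the run easier to follow, the target-side configuration and the first bdevperf pass traced above (the zcopy.sh steps) reduce to the sequence sketched below. This is an illustrative condensation of the traced commands, not captured output; rpc_cmd is the autotest helper seen in the trace, and the paths, NQNs, addresses and bdevperf flags are copied from the log.

    # Target side: TCP transport with zero-copy enabled, one subsystem, one malloc-backed namespace.
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Initiator side: bdevperf reads the gen_nvmf_target_json output (the bdev_nvme_attach_controller
    # config printed above, pointing Nvme1 at 10.0.0.2:4420) from a file descriptor and runs
    # 8 KiB verify I/O at queue depth 128 for 10 seconds.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192

The second bdevperf instance started above (perfpid 74450) uses the same generated JSON but a 5 second randrw workload with -M 50; while it runs, each paired "Requested NSID 1 already in use" / "Unable to add namespace" message records an nvmf_subsystem_add_ns attempt for NSID 1 being rejected, which appears to be the path this part of the test is exercising.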
00:10:02.828 [2024-12-15 05:50:24.214637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.214690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.226657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.226714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.234648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.234694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.235532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.828 [2024-12-15 05:50:24.242639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.242690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.254662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.254721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.262654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.262710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.274663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.274702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.282642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.282690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.290666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.290718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.298664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.298721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.306661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.306710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.318695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.318753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.326664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.326707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.338718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.338779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.346703] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.346755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.354724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.354781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.362708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.362751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 Running I/O for 5 seconds... 00:10:02.828 [2024-12-15 05:50:24.370693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.370716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.384671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.384711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.395501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.395572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.828 [2024-12-15 05:50:24.408334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.828 [2024-12-15 05:50:24.408382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.829 [2024-12-15 05:50:24.424724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.829 [2024-12-15 05:50:24.424797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.829 [2024-12-15 05:50:24.443276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.829 [2024-12-15 05:50:24.443309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.829 [2024-12-15 05:50:24.453766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.829 [2024-12-15 05:50:24.453811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.471970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.472026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.486370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.486416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.495403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.495435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.511875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.511972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.529027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 
[2024-12-15 05:50:24.529075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.539128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.539160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.550977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.551015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.563058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.563103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.574521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.574567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.587228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.587289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.596989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.597066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.611705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.611733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.622282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.622313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.633835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.633880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.649680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.649725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.667064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.667110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.677515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.677560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.688306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.688351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.700264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.700309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.708592] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.708635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.088 [2024-12-15 05:50:24.720944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.088 [2024-12-15 05:50:24.720987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.731750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.731780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.747220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.747260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.763763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.763807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.773295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.773327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.784034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.784063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.796140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.796184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.804973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.805016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.817575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.817635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.828994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.829020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.837075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.837103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.847787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.847829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.856585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.856627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.867925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.867976] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.876516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.876560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.886384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.886426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.895204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.895271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.904544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.904588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.913685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.913729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.922930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.922973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.932465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.932516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.347 [2024-12-15 05:50:24.942007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.347 [2024-12-15 05:50:24.942054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.348 [2024-12-15 05:50:24.951862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.348 [2024-12-15 05:50:24.951941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.348 [2024-12-15 05:50:24.961493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.348 [2024-12-15 05:50:24.961544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.348 [2024-12-15 05:50:24.971954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.348 [2024-12-15 05:50:24.972010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.348 [2024-12-15 05:50:24.984027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.348 [2024-12-15 05:50:24.984073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:24.992864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:24.992932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.002848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.002918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.012922] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.012964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.022594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.022636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.032158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.032201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.041431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.041490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.050732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.050775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.060183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.060243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.069586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.069630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.079365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.079412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.088992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.089035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.098573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.098616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.108346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.108389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.117823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.117866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.127127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.127170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.136591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.136634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.145965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.146007] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.155342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.155386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.165159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.165204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.174543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.174586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.183738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.183781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.193119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.193163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.202389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.202432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.212137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.212179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.221565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.221608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.607 [2024-12-15 05:50:25.231059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.607 [2024-12-15 05:50:25.231102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.865 [2024-12-15 05:50:25.245599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.865 [2024-12-15 05:50:25.245642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.865 [2024-12-15 05:50:25.255107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.865 [2024-12-15 05:50:25.255153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.865 [2024-12-15 05:50:25.270118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.865 [2024-12-15 05:50:25.270164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.865 [2024-12-15 05:50:25.279718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.865 [2024-12-15 05:50:25.279762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.865 [2024-12-15 05:50:25.291786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.865 [2024-12-15 05:50:25.291831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.865 [2024-12-15 05:50:25.303160] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.865 [2024-12-15 05:50:25.303205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[ ... same spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair repeated for each subsequent add-namespace attempt, from 00:10:03.865 [2024-12-15 05:50:25.311939] through 00:10:07.229 [2024-12-15 05:50:28.685431]; identical duplicate entries elided ... ]
00:10:07.229 [2024-12-15 05:50:28.694514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.694557]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.710031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.710074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.720540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.720582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.728667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.728710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.740459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.740502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.751852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.751907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.760360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.760403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.770195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.770238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.779393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.779440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.789196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.789240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.798578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.798639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.808518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.808574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.818360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.818406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.828382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.828439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.842759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.842810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.229 [2024-12-15 05:50:28.860914] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.229 [2024-12-15 05:50:28.860985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.876251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.876310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.892502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.892566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.901547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.901596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.912902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.912967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.924145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.924189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.940523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.940593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.955091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.955129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.964403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.964435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.974913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.974967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.985443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.985490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:28.996104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:28.996137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.007903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.007945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.017044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.017074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.027514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.027577] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.039112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.039158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.047674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.047718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.057701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.057746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.066900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.066954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.076072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.076117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.085668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.085712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.099077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.099122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.107062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.107107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.489 [2024-12-15 05:50:29.118043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.489 [2024-12-15 05:50:29.118086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.129724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.129769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.138475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.138520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.152521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.152566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.160949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.160993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.172900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.172954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.181746] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.181790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.191483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.191529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.200828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.200872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.210200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.210245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.219865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.219934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.229235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.229295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.238585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.238630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.247963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.248018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.257304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.257348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.266307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.266350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.275648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.275692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.284983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.285027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.298524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.298569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.306904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.306957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.318633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.318693] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.329581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.329625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.337665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.337709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.349335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.349379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.360954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.361008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 [2024-12-15 05:50:29.369108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.748 [2024-12-15 05:50:29.369153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.748 00:10:07.748 Latency(us) 00:10:07.748 [2024-12-15T05:50:29.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.748 [2024-12-15T05:50:29.389Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:07.748 Nvme1n1 : 5.01 12889.02 100.70 0.00 0.00 9920.54 4021.53 24784.52 00:10:07.749 [2024-12-15T05:50:29.390Z] =================================================================================================================== 00:10:07.749 [2024-12-15T05:50:29.390Z] Total : 12889.02 100.70 0.00 0.00 9920.54 4021.53 24784.52 00:10:07.749 [2024-12-15 05:50:29.375910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.749 [2024-12-15 05:50:29.375963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.749 [2024-12-15 05:50:29.383908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.749 [2024-12-15 05:50:29.383961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.391958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.392018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.399943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.400001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.407948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.408008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.415973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.416021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.423975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.424022] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.436034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.436087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.443973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.444016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.451965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.451988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.463991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.007 [2024-12-15 05:50:29.464041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.007 [2024-12-15 05:50:29.475964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.008 [2024-12-15 05:50:29.476000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.008 [2024-12-15 05:50:29.492032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.008 [2024-12-15 05:50:29.492063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.008 [2024-12-15 05:50:29.499980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.008 [2024-12-15 05:50:29.500031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.008 [2024-12-15 05:50:29.507987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.008 [2024-12-15 05:50:29.508037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.008 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74450) - No such process 00:10:08.008 05:50:29 -- target/zcopy.sh@49 -- # wait 74450 00:10:08.008 05:50:29 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.008 05:50:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.008 05:50:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.008 05:50:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.008 05:50:29 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:08.008 05:50:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.008 05:50:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.008 delay0 00:10:08.008 05:50:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.008 05:50:29 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:08.008 05:50:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.008 05:50:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.008 05:50:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.008 05:50:29 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:08.266 [2024-12-15 05:50:29.695471] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current 
discovery service or discovery service referral 00:10:14.830 Initializing NVMe Controllers 00:10:14.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:14.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:14.830 Initialization complete. Launching workers. 00:10:14.830 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 827 00:10:14.830 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1114, failed to submit 33 00:10:14.830 success 1006, unsuccess 108, failed 0 00:10:14.830 05:50:35 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:14.830 05:50:35 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:14.830 05:50:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:14.830 05:50:35 -- nvmf/common.sh@116 -- # sync 00:10:14.830 05:50:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:14.830 05:50:35 -- nvmf/common.sh@119 -- # set +e 00:10:14.830 05:50:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:14.830 05:50:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:14.830 rmmod nvme_tcp 00:10:14.830 rmmod nvme_fabrics 00:10:14.830 rmmod nvme_keyring 00:10:14.830 05:50:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:14.830 05:50:35 -- nvmf/common.sh@123 -- # set -e 00:10:14.830 05:50:35 -- nvmf/common.sh@124 -- # return 0 00:10:14.830 05:50:35 -- nvmf/common.sh@477 -- # '[' -n 74305 ']' 00:10:14.830 05:50:35 -- nvmf/common.sh@478 -- # killprocess 74305 00:10:14.830 05:50:35 -- common/autotest_common.sh@936 -- # '[' -z 74305 ']' 00:10:14.830 05:50:35 -- common/autotest_common.sh@940 -- # kill -0 74305 00:10:14.830 05:50:35 -- common/autotest_common.sh@941 -- # uname 00:10:14.830 05:50:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:14.830 05:50:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74305 00:10:14.830 05:50:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:14.830 05:50:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:14.830 05:50:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74305' 00:10:14.830 killing process with pid 74305 00:10:14.830 05:50:35 -- common/autotest_common.sh@955 -- # kill 74305 00:10:14.830 05:50:35 -- common/autotest_common.sh@960 -- # wait 74305 00:10:14.830 05:50:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:14.830 05:50:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:14.830 05:50:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:14.830 05:50:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:14.830 05:50:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:14.830 05:50:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.830 05:50:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.830 05:50:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.830 05:50:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:14.830 00:10:14.830 real 0m24.372s 00:10:14.830 user 0m40.061s 00:10:14.830 sys 0m6.503s 00:10:14.830 05:50:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.830 05:50:36 -- common/autotest_common.sh@10 -- # set +x 00:10:14.830 ************************************ 00:10:14.830 END TEST nvmf_zcopy 00:10:14.830 ************************************ 00:10:14.830 05:50:36 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:14.830 05:50:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:14.830 05:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.830 05:50:36 -- common/autotest_common.sh@10 -- # set +x 00:10:14.830 ************************************ 00:10:14.830 START TEST nvmf_nmic 00:10:14.830 ************************************ 00:10:14.830 05:50:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:14.831 * Looking for test storage... 00:10:14.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:14.831 05:50:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:14.831 05:50:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:14.831 05:50:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:14.831 05:50:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:14.831 05:50:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:14.831 05:50:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:14.831 05:50:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:14.831 05:50:36 -- scripts/common.sh@335 -- # IFS=.-: 00:10:14.831 05:50:36 -- scripts/common.sh@335 -- # read -ra ver1 00:10:14.831 05:50:36 -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.831 05:50:36 -- scripts/common.sh@336 -- # read -ra ver2 00:10:14.831 05:50:36 -- scripts/common.sh@337 -- # local 'op=<' 00:10:14.831 05:50:36 -- scripts/common.sh@339 -- # ver1_l=2 00:10:14.831 05:50:36 -- scripts/common.sh@340 -- # ver2_l=1 00:10:14.831 05:50:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:14.831 05:50:36 -- scripts/common.sh@343 -- # case "$op" in 00:10:14.831 05:50:36 -- scripts/common.sh@344 -- # : 1 00:10:14.831 05:50:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:14.831 05:50:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.831 05:50:36 -- scripts/common.sh@364 -- # decimal 1 00:10:14.831 05:50:36 -- scripts/common.sh@352 -- # local d=1 00:10:14.831 05:50:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.831 05:50:36 -- scripts/common.sh@354 -- # echo 1 00:10:14.831 05:50:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:14.831 05:50:36 -- scripts/common.sh@365 -- # decimal 2 00:10:14.831 05:50:36 -- scripts/common.sh@352 -- # local d=2 00:10:14.831 05:50:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.831 05:50:36 -- scripts/common.sh@354 -- # echo 2 00:10:14.831 05:50:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:14.831 05:50:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:14.831 05:50:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:14.831 05:50:36 -- scripts/common.sh@367 -- # return 0 00:10:14.831 05:50:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.831 05:50:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:14.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.831 --rc genhtml_branch_coverage=1 00:10:14.831 --rc genhtml_function_coverage=1 00:10:14.831 --rc genhtml_legend=1 00:10:14.831 --rc geninfo_all_blocks=1 00:10:14.831 --rc geninfo_unexecuted_blocks=1 00:10:14.831 00:10:14.831 ' 00:10:14.831 05:50:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:14.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.831 --rc genhtml_branch_coverage=1 00:10:14.831 --rc genhtml_function_coverage=1 00:10:14.831 --rc genhtml_legend=1 00:10:14.831 --rc geninfo_all_blocks=1 00:10:14.831 --rc geninfo_unexecuted_blocks=1 00:10:14.831 00:10:14.831 ' 00:10:14.831 05:50:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:14.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.831 --rc genhtml_branch_coverage=1 00:10:14.831 --rc genhtml_function_coverage=1 00:10:14.831 --rc genhtml_legend=1 00:10:14.831 --rc geninfo_all_blocks=1 00:10:14.831 --rc geninfo_unexecuted_blocks=1 00:10:14.831 00:10:14.831 ' 00:10:14.831 05:50:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:14.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.831 --rc genhtml_branch_coverage=1 00:10:14.831 --rc genhtml_function_coverage=1 00:10:14.831 --rc genhtml_legend=1 00:10:14.831 --rc geninfo_all_blocks=1 00:10:14.831 --rc geninfo_unexecuted_blocks=1 00:10:14.831 00:10:14.831 ' 00:10:14.831 05:50:36 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.831 05:50:36 -- nvmf/common.sh@7 -- # uname -s 00:10:14.831 05:50:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.831 05:50:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.831 05:50:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.831 05:50:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.831 05:50:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.831 05:50:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.831 05:50:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.831 05:50:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.831 05:50:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.831 05:50:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.831 05:50:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:14.831 
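The common.sh setup traced above generates a host NQN with `nvme gen-hostnqn` and reuses the UUID portion of it as the host ID; both values are what the later `nvme connect` calls in this run pass as --hostnqn/--hostid. A minimal sketch of that flow, assuming nvme-cli is installed; the variable names below are illustrative (the script itself uses NVME_HOSTNQN/NVME_HOSTID), and the subsystem NQN and 10.0.0.2:4420 listener are the ones shown later in this log:

  # sketch: derive a host NQN/ID pair and connect the way this test does (run as root)
  HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}            # reuse the UUID part of the NQN as the host ID
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420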
05:50:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:14.831 05:50:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.831 05:50:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.831 05:50:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.831 05:50:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.831 05:50:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.831 05:50:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.831 05:50:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.831 05:50:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.831 05:50:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.831 05:50:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.831 05:50:36 -- paths/export.sh@5 -- # export PATH 00:10:14.831 05:50:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.831 05:50:36 -- nvmf/common.sh@46 -- # : 0 00:10:14.831 05:50:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:14.831 05:50:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:14.831 05:50:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:14.831 05:50:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.831 05:50:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.831 05:50:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
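The build_nvmf_app_args step traced above assembles the target's command line as a bash array, appending flags only when the corresponding knob is set (NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF), then the optional no-huge flags). A small self-contained sketch of the same pattern; the APP/EXTRA names and the binary path below are placeholders, not the script's variables:

  # sketch: build a command line as an array and append options conditionally
  APP=(/usr/local/bin/nvmf_tgt)        # hypothetical binary path
  SHM_ID=0
  APP+=(-i "$SHM_ID" -e 0xFFFF)        # mirrors NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  EXTRA=()                             # e.g. no-huge flags; empty expands to nothing
  APP+=("${EXTRA[@]}")
  "${APP[@]}" &                        # expand the array verbatim when launching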
00:10:14.831 05:50:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:14.831 05:50:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:14.831 05:50:36 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.831 05:50:36 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.831 05:50:36 -- target/nmic.sh@14 -- # nvmftestinit 00:10:14.831 05:50:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:14.831 05:50:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.831 05:50:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:14.831 05:50:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:14.831 05:50:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:14.831 05:50:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.831 05:50:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.831 05:50:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.831 05:50:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:14.831 05:50:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:14.831 05:50:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:14.831 05:50:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:14.831 05:50:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:14.831 05:50:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:14.831 05:50:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.831 05:50:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.831 05:50:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:14.831 05:50:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:14.831 05:50:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.831 05:50:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.831 05:50:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.831 05:50:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.831 05:50:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.831 05:50:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.831 05:50:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.831 05:50:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.831 05:50:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:14.831 05:50:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:14.831 Cannot find device "nvmf_tgt_br" 00:10:14.831 05:50:36 -- nvmf/common.sh@154 -- # true 00:10:14.831 05:50:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.831 Cannot find device "nvmf_tgt_br2" 00:10:14.831 05:50:36 -- nvmf/common.sh@155 -- # true 00:10:14.831 05:50:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:15.090 05:50:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:15.090 Cannot find device "nvmf_tgt_br" 00:10:15.090 05:50:36 -- nvmf/common.sh@157 -- # true 00:10:15.090 05:50:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:15.090 Cannot find device "nvmf_tgt_br2" 00:10:15.090 05:50:36 -- nvmf/common.sh@158 -- # true 00:10:15.090 05:50:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:15.090 05:50:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:15.090 05:50:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:15.090 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:15.090 05:50:36 -- nvmf/common.sh@161 -- # true 00:10:15.090 05:50:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:15.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:15.090 05:50:36 -- nvmf/common.sh@162 -- # true 00:10:15.090 05:50:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:15.090 05:50:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:15.090 05:50:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:15.090 05:50:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:15.090 05:50:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:15.090 05:50:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:15.090 05:50:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:15.090 05:50:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:15.090 05:50:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:15.090 05:50:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:15.090 05:50:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:15.090 05:50:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:15.090 05:50:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:15.090 05:50:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:15.090 05:50:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:15.090 05:50:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:15.090 05:50:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:15.090 05:50:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:15.090 05:50:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:15.090 05:50:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:15.090 05:50:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:15.090 05:50:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:15.090 05:50:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:15.349 05:50:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:15.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:10:15.349 00:10:15.349 --- 10.0.0.2 ping statistics --- 00:10:15.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.349 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:15.349 05:50:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:15.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:15.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:15.349 00:10:15.349 --- 10.0.0.3 ping statistics --- 00:10:15.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.349 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:15.349 05:50:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:15.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:15.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:15.349 00:10:15.349 --- 10.0.0.1 ping statistics --- 00:10:15.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.349 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:15.349 05:50:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.349 05:50:36 -- nvmf/common.sh@421 -- # return 0 00:10:15.349 05:50:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:15.349 05:50:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.349 05:50:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:15.349 05:50:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:15.349 05:50:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.349 05:50:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:15.349 05:50:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:15.349 05:50:36 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:15.349 05:50:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:15.349 05:50:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:15.349 05:50:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.349 05:50:36 -- nvmf/common.sh@469 -- # nvmfpid=74782 00:10:15.349 05:50:36 -- nvmf/common.sh@470 -- # waitforlisten 74782 00:10:15.349 05:50:36 -- common/autotest_common.sh@829 -- # '[' -z 74782 ']' 00:10:15.349 05:50:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.349 05:50:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.349 05:50:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.349 05:50:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.349 05:50:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.349 05:50:36 -- common/autotest_common.sh@10 -- # set +x 00:10:15.349 [2024-12-15 05:50:36.817284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:15.349 [2024-12-15 05:50:36.817698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.349 [2024-12-15 05:50:36.952813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.612 [2024-12-15 05:50:36.995441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.612 [2024-12-15 05:50:36.995632] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.612 [2024-12-15 05:50:36.995659] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.612 [2024-12-15 05:50:36.995670] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
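The nvmf_veth_init sequence above wires the initiator and the target namespace together through a bridge: 10.0.0.1 stays on nvmf_init_if in the root namespace, 10.0.0.2 and 10.0.0.3 live on interfaces moved into nvmf_tgt_ns_spdk, and all of the peer ends hang off nvmf_br, with an iptables rule admitting TCP port 4420. A condensed sketch of that topology, limited to commands the log actually runs (the second target interface, loopback bring-up, and error handling are omitted):

  # sketch: one initiator veth + one target veth, joined by a bridge (run as root)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator namespace reaching the target address over the bridge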
00:10:15.612 [2024-12-15 05:50:36.996358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.612 [2024-12-15 05:50:36.996510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.612 [2024-12-15 05:50:36.996620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.612 [2024-12-15 05:50:36.996781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.585 05:50:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.585 05:50:37 -- common/autotest_common.sh@862 -- # return 0 00:10:16.585 05:50:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:16.585 05:50:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 05:50:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.585 05:50:37 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 [2024-12-15 05:50:37.904973] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.585 05:50:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.585 05:50:37 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 Malloc0 00:10:16.585 05:50:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.585 05:50:37 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 05:50:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.585 05:50:37 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 05:50:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.585 05:50:37 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 [2024-12-15 05:50:37.965506] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.585 05:50:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.585 05:50:37 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:16.585 test case1: single bdev can't be used in multiple subsystems 00:10:16.585 05:50:37 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 05:50:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.585 05:50:37 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 
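The rpc_cmd calls traced above bring the target up for test case 1: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. The same sequence written against SPDK's rpc.py client rather than the test's rpc_cmd wrapper is sketched below; the rpc.py path and default RPC socket are assumptions, while the flags are copied from the log:

  # sketch: stand up the subsystem the way test case 1 does
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # adding Malloc0 to a second subsystem (cnode2) is expected to fail, as the log shows:
  # the bdev is already claimed exclusive_write by the NVMe-oF target for cnode1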
00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 05:50:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.585 05:50:37 -- target/nmic.sh@28 -- # nmic_status=0 00:10:16.585 05:50:37 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 [2024-12-15 05:50:37.989301] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:16.585 [2024-12-15 05:50:37.989368] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:16.585 [2024-12-15 05:50:37.989379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.585 request: 00:10:16.585 { 00:10:16.585 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:16.585 "namespace": { 00:10:16.585 "bdev_name": "Malloc0" 00:10:16.585 }, 00:10:16.585 "method": "nvmf_subsystem_add_ns", 00:10:16.585 "req_id": 1 00:10:16.585 } 00:10:16.585 Got JSON-RPC error response 00:10:16.585 response: 00:10:16.585 { 00:10:16.585 "code": -32602, 00:10:16.585 "message": "Invalid parameters" 00:10:16.585 } 00:10:16.585 05:50:37 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:16.585 05:50:37 -- target/nmic.sh@29 -- # nmic_status=1 00:10:16.585 05:50:37 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:16.585 Adding namespace failed - expected result. 00:10:16.585 05:50:37 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:16.585 test case2: host connect to nvmf target in multiple paths 00:10:16.585 05:50:37 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:16.585 05:50:37 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:16.585 05:50:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.585 05:50:37 -- common/autotest_common.sh@10 -- # set +x 00:10:16.585 [2024-12-15 05:50:38.001373] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:16.585 05:50:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.585 05:50:38 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:16.585 05:50:38 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:16.844 05:50:38 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:16.844 05:50:38 -- common/autotest_common.sh@1187 -- # local i=0 00:10:16.844 05:50:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.844 05:50:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:16.844 05:50:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:18.746 05:50:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:18.746 05:50:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:18.746 05:50:40 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.746 05:50:40 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:10:18.746 05:50:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.746 05:50:40 -- common/autotest_common.sh@1197 -- # return 0 00:10:18.746 05:50:40 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:18.746 [global] 00:10:18.746 thread=1 00:10:18.746 invalidate=1 00:10:18.746 rw=write 00:10:18.746 time_based=1 00:10:18.746 runtime=1 00:10:18.746 ioengine=libaio 00:10:18.746 direct=1 00:10:18.746 bs=4096 00:10:18.746 iodepth=1 00:10:18.746 norandommap=0 00:10:18.746 numjobs=1 00:10:18.746 00:10:18.746 verify_dump=1 00:10:18.746 verify_backlog=512 00:10:18.746 verify_state_save=0 00:10:18.746 do_verify=1 00:10:18.746 verify=crc32c-intel 00:10:18.746 [job0] 00:10:18.746 filename=/dev/nvme0n1 00:10:18.746 Could not set queue depth (nvme0n1) 00:10:19.005 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.005 fio-3.35 00:10:19.005 Starting 1 thread 00:10:20.382 00:10:20.382 job0: (groupid=0, jobs=1): err= 0: pid=74868: Sun Dec 15 05:50:41 2024 00:10:20.382 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:20.382 slat (nsec): min=12150, max=53074, avg=14863.14, stdev=4696.10 00:10:20.382 clat (usec): min=127, max=605, avg=172.69, stdev=23.50 00:10:20.382 lat (usec): min=140, max=619, avg=187.55, stdev=24.15 00:10:20.382 clat percentiles (usec): 00:10:20.382 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 00:10:20.382 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 176], 00:10:20.382 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 212], 00:10:20.382 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 293], 99.95th=[ 347], 00:10:20.382 | 99.99th=[ 603] 00:10:20.382 write: IOPS=3145, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:10:20.382 slat (nsec): min=14691, max=91862, avg=22569.51, stdev=6677.61 00:10:20.382 clat (usec): min=78, max=285, avg=108.71, stdev=20.07 00:10:20.382 lat (usec): min=96, max=353, avg=131.28, stdev=21.98 00:10:20.382 clat percentiles (usec): 00:10:20.382 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 93], 00:10:20.382 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 103], 60.00th=[ 110], 00:10:20.382 | 70.00th=[ 117], 80.00th=[ 124], 90.00th=[ 135], 95.00th=[ 147], 00:10:20.382 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 221], 99.95th=[ 262], 00:10:20.382 | 99.99th=[ 285] 00:10:20.382 bw ( KiB/s): min=12288, max=12288, per=97.65%, avg=12288.00, stdev= 0.00, samples=1 00:10:20.382 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:20.382 lat (usec) : 100=21.23%, 250=78.52%, 500=0.23%, 750=0.02% 00:10:20.382 cpu : usr=2.40%, sys=8.90%, ctx=6221, majf=0, minf=5 00:10:20.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.382 issued rwts: total=3072,3149,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.382 00:10:20.382 Run status group 0 (all jobs): 00:10:20.382 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:20.382 WRITE: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=12.3MiB (12.9MB), run=1001-1001msec 00:10:20.382 00:10:20.382 Disk stats 
(read/write): 00:10:20.382 nvme0n1: ios=2623/3072, merge=0/0, ticks=497/397, in_queue=894, util=91.28% 00:10:20.382 05:50:41 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:20.382 05:50:41 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.382 05:50:41 -- common/autotest_common.sh@1208 -- # local i=0 00:10:20.382 05:50:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:20.382 05:50:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.382 05:50:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:20.382 05:50:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.382 05:50:41 -- common/autotest_common.sh@1220 -- # return 0 00:10:20.382 05:50:41 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:20.382 05:50:41 -- target/nmic.sh@53 -- # nvmftestfini 00:10:20.382 05:50:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:20.382 05:50:41 -- nvmf/common.sh@116 -- # sync 00:10:20.382 05:50:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:20.382 05:50:41 -- nvmf/common.sh@119 -- # set +e 00:10:20.382 05:50:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:20.382 05:50:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:20.382 rmmod nvme_tcp 00:10:20.382 rmmod nvme_fabrics 00:10:20.382 rmmod nvme_keyring 00:10:20.382 05:50:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:20.382 05:50:41 -- nvmf/common.sh@123 -- # set -e 00:10:20.382 05:50:41 -- nvmf/common.sh@124 -- # return 0 00:10:20.382 05:50:41 -- nvmf/common.sh@477 -- # '[' -n 74782 ']' 00:10:20.382 05:50:41 -- nvmf/common.sh@478 -- # killprocess 74782 00:10:20.382 05:50:41 -- common/autotest_common.sh@936 -- # '[' -z 74782 ']' 00:10:20.382 05:50:41 -- common/autotest_common.sh@940 -- # kill -0 74782 00:10:20.382 05:50:41 -- common/autotest_common.sh@941 -- # uname 00:10:20.382 05:50:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:20.382 05:50:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74782 00:10:20.382 05:50:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:20.382 killing process with pid 74782 00:10:20.382 05:50:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:20.382 05:50:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74782' 00:10:20.382 05:50:41 -- common/autotest_common.sh@955 -- # kill 74782 00:10:20.382 05:50:41 -- common/autotest_common.sh@960 -- # wait 74782 00:10:20.382 05:50:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:20.382 05:50:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:20.382 05:50:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:20.382 05:50:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.382 05:50:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:20.382 05:50:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.382 05:50:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.382 05:50:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.382 05:50:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:20.382 00:10:20.382 real 0m5.786s 00:10:20.382 user 0m18.950s 00:10:20.382 sys 0m2.063s 00:10:20.382 05:50:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.382 05:50:42 -- common/autotest_common.sh@10 
-- # set +x 00:10:20.382 ************************************ 00:10:20.382 END TEST nvmf_nmic 00:10:20.382 ************************************ 00:10:20.642 05:50:42 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:20.642 05:50:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:20.642 05:50:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.642 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:10:20.642 ************************************ 00:10:20.642 START TEST nvmf_fio_target 00:10:20.642 ************************************ 00:10:20.642 05:50:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:20.642 * Looking for test storage... 00:10:20.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.642 05:50:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:20.642 05:50:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:20.642 05:50:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:20.642 05:50:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:20.642 05:50:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:20.642 05:50:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:20.642 05:50:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:20.642 05:50:42 -- scripts/common.sh@335 -- # IFS=.-: 00:10:20.642 05:50:42 -- scripts/common.sh@335 -- # read -ra ver1 00:10:20.642 05:50:42 -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.642 05:50:42 -- scripts/common.sh@336 -- # read -ra ver2 00:10:20.642 05:50:42 -- scripts/common.sh@337 -- # local 'op=<' 00:10:20.642 05:50:42 -- scripts/common.sh@339 -- # ver1_l=2 00:10:20.642 05:50:42 -- scripts/common.sh@340 -- # ver2_l=1 00:10:20.642 05:50:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:20.642 05:50:42 -- scripts/common.sh@343 -- # case "$op" in 00:10:20.642 05:50:42 -- scripts/common.sh@344 -- # : 1 00:10:20.642 05:50:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:20.642 05:50:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.642 05:50:42 -- scripts/common.sh@364 -- # decimal 1 00:10:20.642 05:50:42 -- scripts/common.sh@352 -- # local d=1 00:10:20.642 05:50:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.642 05:50:42 -- scripts/common.sh@354 -- # echo 1 00:10:20.642 05:50:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:20.642 05:50:42 -- scripts/common.sh@365 -- # decimal 2 00:10:20.642 05:50:42 -- scripts/common.sh@352 -- # local d=2 00:10:20.642 05:50:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.642 05:50:42 -- scripts/common.sh@354 -- # echo 2 00:10:20.642 05:50:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:20.642 05:50:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:20.642 05:50:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:20.642 05:50:42 -- scripts/common.sh@367 -- # return 0 00:10:20.642 05:50:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.642 05:50:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:20.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.642 --rc genhtml_branch_coverage=1 00:10:20.642 --rc genhtml_function_coverage=1 00:10:20.642 --rc genhtml_legend=1 00:10:20.642 --rc geninfo_all_blocks=1 00:10:20.642 --rc geninfo_unexecuted_blocks=1 00:10:20.642 00:10:20.642 ' 00:10:20.642 05:50:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:20.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.642 --rc genhtml_branch_coverage=1 00:10:20.642 --rc genhtml_function_coverage=1 00:10:20.642 --rc genhtml_legend=1 00:10:20.642 --rc geninfo_all_blocks=1 00:10:20.642 --rc geninfo_unexecuted_blocks=1 00:10:20.642 00:10:20.642 ' 00:10:20.642 05:50:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:20.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.642 --rc genhtml_branch_coverage=1 00:10:20.642 --rc genhtml_function_coverage=1 00:10:20.642 --rc genhtml_legend=1 00:10:20.642 --rc geninfo_all_blocks=1 00:10:20.642 --rc geninfo_unexecuted_blocks=1 00:10:20.642 00:10:20.642 ' 00:10:20.642 05:50:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:20.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.642 --rc genhtml_branch_coverage=1 00:10:20.642 --rc genhtml_function_coverage=1 00:10:20.642 --rc genhtml_legend=1 00:10:20.642 --rc geninfo_all_blocks=1 00:10:20.642 --rc geninfo_unexecuted_blocks=1 00:10:20.642 00:10:20.642 ' 00:10:20.642 05:50:42 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.642 05:50:42 -- nvmf/common.sh@7 -- # uname -s 00:10:20.642 05:50:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.642 05:50:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.642 05:50:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.642 05:50:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.642 05:50:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.642 05:50:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.642 05:50:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.642 05:50:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.642 05:50:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.642 05:50:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.642 05:50:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:20.642 
05:50:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:20.642 05:50:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.642 05:50:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.642 05:50:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.642 05:50:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.642 05:50:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.642 05:50:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.642 05:50:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.642 05:50:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.642 05:50:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.642 05:50:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.642 05:50:42 -- paths/export.sh@5 -- # export PATH 00:10:20.642 05:50:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.642 05:50:42 -- nvmf/common.sh@46 -- # : 0 00:10:20.642 05:50:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:20.642 05:50:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:20.642 05:50:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:20.642 05:50:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.642 05:50:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.642 05:50:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
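The $NVME_HOSTNQN/$NVME_HOSTID pair set here is what the initiator side uses for every connect in this run. A rough sketch of that flow, assembled from the connect/disconnect lines already present in this log (nmic.sh above connects over both listeners; fio.sh below uses only port 4420):

    # attach the initiator to the subsystem, once per listener
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # wait for the namespaces to appear, keyed on the subsystem serial number
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    # detach all paths when the test is done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1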
00:10:20.642 05:50:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:20.642 05:50:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:20.642 05:50:42 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.642 05:50:42 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.642 05:50:42 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.642 05:50:42 -- target/fio.sh@16 -- # nvmftestinit 00:10:20.642 05:50:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:20.642 05:50:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.642 05:50:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:20.642 05:50:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:20.642 05:50:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:20.642 05:50:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.642 05:50:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.642 05:50:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.642 05:50:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:20.643 05:50:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:20.643 05:50:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:20.643 05:50:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:20.643 05:50:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:20.643 05:50:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:20.643 05:50:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.643 05:50:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.643 05:50:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:20.643 05:50:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:20.643 05:50:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.643 05:50:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.643 05:50:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.643 05:50:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.643 05:50:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.643 05:50:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.643 05:50:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.643 05:50:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.643 05:50:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:20.901 05:50:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:20.901 Cannot find device "nvmf_tgt_br" 00:10:20.901 05:50:42 -- nvmf/common.sh@154 -- # true 00:10:20.901 05:50:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.901 Cannot find device "nvmf_tgt_br2" 00:10:20.901 05:50:42 -- nvmf/common.sh@155 -- # true 00:10:20.901 05:50:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:20.901 05:50:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:20.901 Cannot find device "nvmf_tgt_br" 00:10:20.901 05:50:42 -- nvmf/common.sh@157 -- # true 00:10:20.901 05:50:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:20.901 Cannot find device "nvmf_tgt_br2" 00:10:20.901 05:50:42 -- nvmf/common.sh@158 -- # true 00:10:20.901 05:50:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:20.901 05:50:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:20.901 05:50:42 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.901 05:50:42 -- nvmf/common.sh@161 -- # true 00:10:20.901 05:50:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.901 05:50:42 -- nvmf/common.sh@162 -- # true 00:10:20.901 05:50:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.901 05:50:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.901 05:50:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.901 05:50:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.901 05:50:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.901 05:50:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.901 05:50:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.901 05:50:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:20.901 05:50:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:20.901 05:50:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:20.901 05:50:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:20.901 05:50:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:20.901 05:50:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:20.901 05:50:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.901 05:50:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.901 05:50:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.901 05:50:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:20.901 05:50:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:21.254 05:50:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:21.254 05:50:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:21.255 05:50:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.255 05:50:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.255 05:50:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.255 05:50:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:21.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:21.255 00:10:21.255 --- 10.0.0.2 ping statistics --- 00:10:21.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.255 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:21.255 05:50:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:21.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:21.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:21.255 00:10:21.255 --- 10.0.0.3 ping statistics --- 00:10:21.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.255 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:21.255 05:50:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:21.255 00:10:21.255 --- 10.0.0.1 ping statistics --- 00:10:21.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.255 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:21.255 05:50:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.255 05:50:42 -- nvmf/common.sh@421 -- # return 0 00:10:21.255 05:50:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:21.255 05:50:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.255 05:50:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:21.255 05:50:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:21.255 05:50:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.255 05:50:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:21.255 05:50:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:21.255 05:50:42 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:21.255 05:50:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:21.255 05:50:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.255 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:10:21.255 05:50:42 -- nvmf/common.sh@469 -- # nvmfpid=75054 00:10:21.255 05:50:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.255 05:50:42 -- nvmf/common.sh@470 -- # waitforlisten 75054 00:10:21.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.255 05:50:42 -- common/autotest_common.sh@829 -- # '[' -z 75054 ']' 00:10:21.255 05:50:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.255 05:50:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.255 05:50:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.255 05:50:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.255 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:10:21.255 [2024-12-15 05:50:42.667002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:21.255 [2024-12-15 05:50:42.667098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.255 [2024-12-15 05:50:42.802447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.255 [2024-12-15 05:50:42.833985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:21.255 [2024-12-15 05:50:42.834127] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.255 [2024-12-15 05:50:42.834140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
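The addresses pinged here come from the nvmf_veth_init fixture traced above. Condensed, the topology and the target launch look roughly like this (a sketch restating commands already shown in this trace, with the bridge-port and iptables steps abbreviated):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target listener, 10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target address, 10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                # the *_br veth peers are enslaved to this bridge
    # the target then runs inside the namespace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF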
00:10:21.255 [2024-12-15 05:50:42.834147] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.255 [2024-12-15 05:50:42.834283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.255 [2024-12-15 05:50:42.834906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.255 [2024-12-15 05:50:42.835062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.255 [2024-12-15 05:50:42.835082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.513 05:50:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.513 05:50:42 -- common/autotest_common.sh@862 -- # return 0 00:10:21.513 05:50:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:21.513 05:50:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.513 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 05:50:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.513 05:50:42 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.770 [2024-12-15 05:50:43.243325] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.770 05:50:43 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.028 05:50:43 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:22.028 05:50:43 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.287 05:50:43 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:22.287 05:50:43 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.545 05:50:44 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:22.545 05:50:44 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.804 05:50:44 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:22.804 05:50:44 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:23.062 05:50:44 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.321 05:50:44 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:23.321 05:50:44 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.578 05:50:45 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:23.578 05:50:45 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.835 05:50:45 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:23.835 05:50:45 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:24.093 05:50:45 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:24.352 05:50:45 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.352 05:50:45 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.610 05:50:46 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.610 05:50:46 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.868 05:50:46 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.127 [2024-12-15 05:50:46.678308] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.127 05:50:46 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:25.386 05:50:46 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:25.644 05:50:47 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:25.903 05:50:47 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:25.903 05:50:47 -- common/autotest_common.sh@1187 -- # local i=0 00:10:25.903 05:50:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.903 05:50:47 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:10:25.903 05:50:47 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:10:25.903 05:50:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:27.804 05:50:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:27.804 05:50:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:27.804 05:50:49 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.804 05:50:49 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:10:27.804 05:50:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.804 05:50:49 -- common/autotest_common.sh@1197 -- # return 0 00:10:27.804 05:50:49 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:27.804 [global] 00:10:27.804 thread=1 00:10:27.804 invalidate=1 00:10:27.804 rw=write 00:10:27.804 time_based=1 00:10:27.804 runtime=1 00:10:27.804 ioengine=libaio 00:10:27.804 direct=1 00:10:27.804 bs=4096 00:10:27.805 iodepth=1 00:10:27.805 norandommap=0 00:10:27.805 numjobs=1 00:10:27.805 00:10:27.805 verify_dump=1 00:10:27.805 verify_backlog=512 00:10:27.805 verify_state_save=0 00:10:27.805 do_verify=1 00:10:27.805 verify=crc32c-intel 00:10:27.805 [job0] 00:10:27.805 filename=/dev/nvme0n1 00:10:27.805 [job1] 00:10:27.805 filename=/dev/nvme0n2 00:10:27.805 [job2] 00:10:27.805 filename=/dev/nvme0n3 00:10:27.805 [job3] 00:10:27.805 filename=/dev/nvme0n4 00:10:28.063 Could not set queue depth (nvme0n1) 00:10:28.063 Could not set queue depth (nvme0n2) 00:10:28.063 Could not set queue depth (nvme0n3) 00:10:28.063 Could not set queue depth (nvme0n4) 00:10:28.063 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.063 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.063 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.063 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.063 fio-3.35 00:10:28.063 Starting 4 threads 00:10:29.439 00:10:29.439 job0: (groupid=0, jobs=1): err= 0: pid=75237: Sun Dec 15 05:50:50 2024 00:10:29.439 read: IOPS=2494, BW=9978KiB/s (10.2MB/s)(9988KiB/1001msec) 
00:10:29.439 slat (nsec): min=9620, max=55207, avg=16195.73, stdev=4676.42 00:10:29.439 clat (usec): min=127, max=2631, avg=213.46, stdev=99.14 00:10:29.439 lat (usec): min=140, max=2646, avg=229.66, stdev=98.85 00:10:29.439 clat percentiles (usec): 00:10:29.439 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:10:29.439 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:10:29.439 | 70.00th=[ 192], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[ 375], 00:10:29.439 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 668], 99.95th=[ 1172], 00:10:29.439 | 99.99th=[ 2638] 00:10:29.439 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:29.439 slat (usec): min=13, max=104, avg=22.86, stdev= 5.42 00:10:29.439 clat (usec): min=90, max=7704, avg=139.98, stdev=154.15 00:10:29.439 lat (usec): min=111, max=7724, avg=162.84, stdev=154.01 00:10:29.439 clat percentiles (usec): 00:10:29.439 | 1.00th=[ 99], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 116], 00:10:29.439 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 133], 00:10:29.439 | 70.00th=[ 139], 80.00th=[ 149], 90.00th=[ 176], 95.00th=[ 221], 00:10:29.439 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 445], 99.95th=[ 734], 00:10:29.439 | 99.99th=[ 7701] 00:10:29.439 bw ( KiB/s): min=12288, max=12288, per=32.15%, avg=12288.00, stdev= 0.00, samples=1 00:10:29.439 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:29.439 lat (usec) : 100=0.69%, 250=85.29%, 500=13.60%, 750=0.36% 00:10:29.439 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:10:29.439 cpu : usr=2.60%, sys=7.40%, ctx=5059, majf=0, minf=9 00:10:29.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.439 issued rwts: total=2497,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.439 job1: (groupid=0, jobs=1): err= 0: pid=75238: Sun Dec 15 05:50:50 2024 00:10:29.439 read: IOPS=1582, BW=6330KiB/s (6482kB/s)(6336KiB/1001msec) 00:10:29.439 slat (usec): min=12, max=162, avg=18.20, stdev= 7.25 00:10:29.439 clat (usec): min=154, max=1947, avg=297.00, stdev=74.25 00:10:29.439 lat (usec): min=173, max=1965, avg=315.19, stdev=75.31 00:10:29.439 clat percentiles (usec): 00:10:29.439 | 1.00th=[ 219], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 251], 00:10:29.439 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:10:29.439 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 392], 95.00th=[ 420], 00:10:29.439 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 898], 99.95th=[ 1942], 00:10:29.439 | 99.99th=[ 1942] 00:10:29.439 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:29.439 slat (usec): min=12, max=245, avg=25.37, stdev= 9.85 00:10:29.439 clat (usec): min=96, max=1771, avg=215.25, stdev=60.08 00:10:29.439 lat (usec): min=120, max=1794, avg=240.62, stdev=62.06 00:10:29.439 clat percentiles (usec): 00:10:29.439 | 1.00th=[ 123], 5.00th=[ 143], 10.00th=[ 167], 20.00th=[ 186], 00:10:29.439 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:10:29.439 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 289], 95.00th=[ 314], 00:10:29.439 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 465], 99.95th=[ 988], 00:10:29.439 | 99.99th=[ 1778] 00:10:29.439 bw ( KiB/s): min= 8192, max= 8192, per=21.43%, avg=8192.00, stdev= 0.00, samples=1 
00:10:29.439 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:29.439 lat (usec) : 100=0.03%, 250=56.06%, 500=43.61%, 750=0.19%, 1000=0.06% 00:10:29.439 lat (msec) : 2=0.06% 00:10:29.439 cpu : usr=2.00%, sys=6.10%, ctx=3634, majf=0, minf=7 00:10:29.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.439 issued rwts: total=1584,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.439 job2: (groupid=0, jobs=1): err= 0: pid=75239: Sun Dec 15 05:50:50 2024 00:10:29.439 read: IOPS=2600, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:10:29.439 slat (nsec): min=12018, max=78423, avg=15223.99, stdev=3837.93 00:10:29.439 clat (usec): min=131, max=1101, avg=177.56, stdev=25.56 00:10:29.439 lat (usec): min=145, max=1118, avg=192.79, stdev=25.83 00:10:29.439 clat percentiles (usec): 00:10:29.439 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 163], 00:10:29.439 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:29.439 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:10:29.439 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 265], 99.95th=[ 474], 00:10:29.439 | 99.99th=[ 1106] 00:10:29.439 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:29.439 slat (usec): min=14, max=105, avg=22.54, stdev= 6.52 00:10:29.439 clat (usec): min=94, max=326, avg=136.47, stdev=16.25 00:10:29.439 lat (usec): min=115, max=357, avg=159.01, stdev=17.41 00:10:29.439 clat percentiles (usec): 00:10:29.439 | 1.00th=[ 106], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 124], 00:10:29.439 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:10:29.439 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 165], 00:10:29.439 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 251], 00:10:29.439 | 99.99th=[ 326] 00:10:29.439 bw ( KiB/s): min=12288, max=12288, per=32.15%, avg=12288.00, stdev= 0.00, samples=1 00:10:29.439 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:29.439 lat (usec) : 100=0.11%, 250=99.81%, 500=0.07% 00:10:29.439 lat (msec) : 2=0.02% 00:10:29.439 cpu : usr=2.80%, sys=7.90%, ctx=5676, majf=0, minf=3 00:10:29.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.439 issued rwts: total=2603,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.440 job3: (groupid=0, jobs=1): err= 0: pid=75240: Sun Dec 15 05:50:50 2024 00:10:29.440 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:29.440 slat (nsec): min=13343, max=82279, avg=22226.14, stdev=10272.72 00:10:29.440 clat (usec): min=164, max=3185, avg=323.76, stdev=131.43 00:10:29.440 lat (usec): min=184, max=3215, avg=345.98, stdev=137.45 00:10:29.440 clat percentiles (usec): 00:10:29.440 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 251], 00:10:29.440 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 289], 00:10:29.440 | 70.00th=[ 310], 80.00th=[ 363], 90.00th=[ 529], 95.00th=[ 586], 00:10:29.440 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 668], 99.95th=[ 3195], 
00:10:29.440 | 99.99th=[ 3195] 00:10:29.440 write: IOPS=1884, BW=7536KiB/s (7717kB/s)(7544KiB/1001msec); 0 zone resets 00:10:29.440 slat (nsec): min=19516, max=98905, avg=28479.75, stdev=9108.98 00:10:29.440 clat (usec): min=112, max=621, avg=215.57, stdev=40.78 00:10:29.440 lat (usec): min=135, max=642, avg=244.05, stdev=45.32 00:10:29.440 clat percentiles (usec): 00:10:29.440 | 1.00th=[ 129], 5.00th=[ 163], 10.00th=[ 176], 20.00th=[ 188], 00:10:29.440 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 217], 00:10:29.440 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 273], 95.00th=[ 293], 00:10:29.440 | 99.00th=[ 330], 99.50th=[ 371], 99.90th=[ 433], 99.95th=[ 619], 00:10:29.440 | 99.99th=[ 619] 00:10:29.440 bw ( KiB/s): min= 8192, max= 8192, per=21.43%, avg=8192.00, stdev= 0.00, samples=1 00:10:29.440 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:29.440 lat (usec) : 250=54.68%, 500=39.80%, 750=5.49% 00:10:29.440 lat (msec) : 4=0.03% 00:10:29.440 cpu : usr=1.60%, sys=7.20%, ctx=3423, majf=0, minf=19 00:10:29.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.440 issued rwts: total=1536,1886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.440 00:10:29.440 Run status group 0 (all jobs): 00:10:29.440 READ: bw=32.1MiB/s (33.6MB/s), 6138KiB/s-10.2MiB/s (6285kB/s-10.7MB/s), io=32.1MiB (33.7MB), run=1001-1001msec 00:10:29.440 WRITE: bw=37.3MiB/s (39.1MB/s), 7536KiB/s-12.0MiB/s (7717kB/s-12.6MB/s), io=37.4MiB (39.2MB), run=1001-1001msec 00:10:29.440 00:10:29.440 Disk stats (read/write): 00:10:29.440 nvme0n1: ios=2158/2560, merge=0/0, ticks=398/379, in_queue=777, util=86.59% 00:10:29.440 nvme0n2: ios=1584/1601, merge=0/0, ticks=493/345, in_queue=838, util=88.66% 00:10:29.440 nvme0n3: ios=2287/2560, merge=0/0, ticks=419/380, in_queue=799, util=89.19% 00:10:29.440 nvme0n4: ios=1473/1536, merge=0/0, ticks=476/329, in_queue=805, util=89.54% 00:10:29.440 05:50:50 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:29.440 [global] 00:10:29.440 thread=1 00:10:29.440 invalidate=1 00:10:29.440 rw=randwrite 00:10:29.440 time_based=1 00:10:29.440 runtime=1 00:10:29.440 ioengine=libaio 00:10:29.440 direct=1 00:10:29.440 bs=4096 00:10:29.440 iodepth=1 00:10:29.440 norandommap=0 00:10:29.440 numjobs=1 00:10:29.440 00:10:29.440 verify_dump=1 00:10:29.440 verify_backlog=512 00:10:29.440 verify_state_save=0 00:10:29.440 do_verify=1 00:10:29.440 verify=crc32c-intel 00:10:29.440 [job0] 00:10:29.440 filename=/dev/nvme0n1 00:10:29.440 [job1] 00:10:29.440 filename=/dev/nvme0n2 00:10:29.440 [job2] 00:10:29.440 filename=/dev/nvme0n3 00:10:29.440 [job3] 00:10:29.440 filename=/dev/nvme0n4 00:10:29.440 Could not set queue depth (nvme0n1) 00:10:29.440 Could not set queue depth (nvme0n2) 00:10:29.440 Could not set queue depth (nvme0n3) 00:10:29.440 Could not set queue depth (nvme0n4) 00:10:29.440 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.440 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.440 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.440 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.440 fio-3.35 00:10:29.440 Starting 4 threads 00:10:30.844 00:10:30.844 job0: (groupid=0, jobs=1): err= 0: pid=75299: Sun Dec 15 05:50:52 2024 00:10:30.844 read: IOPS=3028, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 00:10:30.844 slat (usec): min=12, max=102, avg=15.73, stdev= 5.06 00:10:30.844 clat (usec): min=75, max=515, avg=162.60, stdev=17.34 00:10:30.844 lat (usec): min=137, max=529, avg=178.34, stdev=17.93 00:10:30.844 clat percentiles (usec): 00:10:30.844 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:30.844 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:30.844 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:10:30.844 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 243], 99.95th=[ 289], 00:10:30.844 | 99.99th=[ 515] 00:10:30.844 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:30.844 slat (usec): min=15, max=141, avg=23.81, stdev= 7.14 00:10:30.844 clat (usec): min=88, max=244, avg=121.72, stdev=14.53 00:10:30.844 lat (usec): min=112, max=344, avg=145.54, stdev=16.54 00:10:30.844 clat percentiles (usec): 00:10:30.844 | 1.00th=[ 97], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:10:30.844 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 124], 00:10:30.844 | 70.00th=[ 128], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 149], 00:10:30.844 | 99.00th=[ 161], 99.50th=[ 172], 99.90th=[ 182], 99.95th=[ 231], 00:10:30.844 | 99.99th=[ 245] 00:10:30.844 bw ( KiB/s): min=12263, max=12263, per=29.34%, avg=12263.00, stdev= 0.00, samples=1 00:10:30.844 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:30.844 lat (usec) : 100=1.36%, 250=98.59%, 500=0.03%, 750=0.02% 00:10:30.844 cpu : usr=2.20%, sys=10.00%, ctx=6113, majf=0, minf=13 00:10:30.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.844 issued rwts: total=3032,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.844 job1: (groupid=0, jobs=1): err= 0: pid=75300: Sun Dec 15 05:50:52 2024 00:10:30.844 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:30.844 slat (nsec): min=8666, max=44390, avg=13564.43, stdev=4526.73 00:10:30.844 clat (usec): min=194, max=312, avg=239.53, stdev=18.67 00:10:30.844 lat (usec): min=207, max=346, avg=253.10, stdev=19.32 00:10:30.844 clat percentiles (usec): 00:10:30.844 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:10:30.844 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:10:30.844 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:10:30.844 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 310], 99.95th=[ 310], 00:10:30.844 | 99.99th=[ 314] 00:10:30.844 write: IOPS=2156, BW=8627KiB/s (8834kB/s)(8636KiB/1001msec); 0 zone resets 00:10:30.844 slat (nsec): min=10791, max=53702, avg=20148.82, stdev=5416.38 00:10:30.844 clat (usec): min=121, max=1634, avg=199.52, stdev=36.10 00:10:30.844 lat (usec): min=147, max=1650, avg=219.67, stdev=36.27 00:10:30.844 clat percentiles (usec): 00:10:30.844 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 184], 00:10:30.844 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:10:30.844 | 
70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 233], 00:10:30.844 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 273], 99.95th=[ 277], 00:10:30.844 | 99.99th=[ 1631] 00:10:30.844 bw ( KiB/s): min= 8720, max= 8720, per=20.86%, avg=8720.00, stdev= 0.00, samples=1 00:10:30.844 iops : min= 2180, max= 2180, avg=2180.00, stdev= 0.00, samples=1 00:10:30.844 lat (usec) : 250=86.43%, 500=13.55% 00:10:30.844 lat (msec) : 2=0.02% 00:10:30.844 cpu : usr=1.90%, sys=6.00%, ctx=4207, majf=0, minf=15 00:10:30.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.844 issued rwts: total=2048,2159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.844 job2: (groupid=0, jobs=1): err= 0: pid=75301: Sun Dec 15 05:50:52 2024 00:10:30.844 read: IOPS=2677, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:10:30.844 slat (nsec): min=12929, max=47364, avg=16330.11, stdev=3634.26 00:10:30.844 clat (usec): min=136, max=2566, avg=174.56, stdev=49.44 00:10:30.844 lat (usec): min=153, max=2583, avg=190.89, stdev=49.51 00:10:30.844 clat percentiles (usec): 00:10:30.844 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:30.844 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:30.844 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:10:30.844 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 277], 99.95th=[ 474], 00:10:30.844 | 99.99th=[ 2573] 00:10:30.844 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:30.844 slat (usec): min=18, max=121, avg=23.02, stdev= 5.46 00:10:30.844 clat (usec): min=97, max=270, avg=132.24, stdev=16.41 00:10:30.844 lat (usec): min=117, max=392, avg=155.26, stdev=18.18 00:10:30.844 clat percentiles (usec): 00:10:30.844 | 1.00th=[ 106], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 119], 00:10:30.844 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 135], 00:10:30.844 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:10:30.844 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 208], 99.95th=[ 253], 00:10:30.844 | 99.99th=[ 269] 00:10:30.844 bw ( KiB/s): min=12263, max=12263, per=29.34%, avg=12263.00, stdev= 0.00, samples=1 00:10:30.844 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:30.844 lat (usec) : 100=0.02%, 250=99.88%, 500=0.09% 00:10:30.844 lat (msec) : 4=0.02% 00:10:30.844 cpu : usr=2.60%, sys=8.70%, ctx=5754, majf=0, minf=9 00:10:30.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.844 issued rwts: total=2680,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.844 job3: (groupid=0, jobs=1): err= 0: pid=75302: Sun Dec 15 05:50:52 2024 00:10:30.844 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:30.845 slat (nsec): min=8471, max=64007, avg=11941.19, stdev=4370.82 00:10:30.845 clat (usec): min=199, max=339, avg=241.47, stdev=18.95 00:10:30.845 lat (usec): min=210, max=362, avg=253.41, stdev=19.56 00:10:30.845 clat percentiles (usec): 00:10:30.845 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 
00:10:30.845 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:10:30.845 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:10:30.845 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 314], 99.95th=[ 318], 00:10:30.845 | 99.99th=[ 338] 00:10:30.845 write: IOPS=2155, BW=8623KiB/s (8830kB/s)(8632KiB/1001msec); 0 zone resets 00:10:30.845 slat (usec): min=10, max=122, avg=17.08, stdev= 5.36 00:10:30.845 clat (usec): min=155, max=1722, avg=203.01, stdev=37.95 00:10:30.845 lat (usec): min=171, max=1743, avg=220.09, stdev=38.27 00:10:30.845 clat percentiles (usec): 00:10:30.845 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:10:30.845 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:10:30.845 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 237], 00:10:30.845 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 277], 99.95th=[ 285], 00:10:30.845 | 99.99th=[ 1729] 00:10:30.845 bw ( KiB/s): min= 8702, max= 8702, per=20.82%, avg=8702.00, stdev= 0.00, samples=1 00:10:30.845 iops : min= 2175, max= 2175, avg=2175.00, stdev= 0.00, samples=1 00:10:30.845 lat (usec) : 250=84.43%, 500=15.55% 00:10:30.845 lat (msec) : 2=0.02% 00:10:30.845 cpu : usr=1.20%, sys=5.40%, ctx=4207, majf=0, minf=9 00:10:30.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.845 issued rwts: total=2048,2158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.845 00:10:30.845 Run status group 0 (all jobs): 00:10:30.845 READ: bw=38.3MiB/s (40.1MB/s), 8184KiB/s-11.8MiB/s (8380kB/s-12.4MB/s), io=38.3MiB (40.2MB), run=1001-1001msec 00:10:30.845 WRITE: bw=40.8MiB/s (42.8MB/s), 8623KiB/s-12.0MiB/s (8830kB/s-12.6MB/s), io=40.9MiB (42.8MB), run=1001-1001msec 00:10:30.845 00:10:30.845 Disk stats (read/write): 00:10:30.845 nvme0n1: ios=2610/2750, merge=0/0, ticks=448/352, in_queue=800, util=88.38% 00:10:30.845 nvme0n2: ios=1680/2048, merge=0/0, ticks=401/399, in_queue=800, util=89.61% 00:10:30.845 nvme0n3: ios=2412/2560, merge=0/0, ticks=441/361, in_queue=802, util=89.14% 00:10:30.845 nvme0n4: ios=1631/2048, merge=0/0, ticks=374/367, in_queue=741, util=89.78% 00:10:30.845 05:50:52 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:30.845 [global] 00:10:30.845 thread=1 00:10:30.845 invalidate=1 00:10:30.845 rw=write 00:10:30.845 time_based=1 00:10:30.845 runtime=1 00:10:30.845 ioengine=libaio 00:10:30.845 direct=1 00:10:30.845 bs=4096 00:10:30.845 iodepth=128 00:10:30.845 norandommap=0 00:10:30.845 numjobs=1 00:10:30.845 00:10:30.845 verify_dump=1 00:10:30.845 verify_backlog=512 00:10:30.845 verify_state_save=0 00:10:30.845 do_verify=1 00:10:30.845 verify=crc32c-intel 00:10:30.845 [job0] 00:10:30.845 filename=/dev/nvme0n1 00:10:30.845 [job1] 00:10:30.845 filename=/dev/nvme0n2 00:10:30.845 [job2] 00:10:30.845 filename=/dev/nvme0n3 00:10:30.845 [job3] 00:10:30.845 filename=/dev/nvme0n4 00:10:30.845 Could not set queue depth (nvme0n1) 00:10:30.845 Could not set queue depth (nvme0n2) 00:10:30.845 Could not set queue depth (nvme0n3) 00:10:30.845 Could not set queue depth (nvme0n4) 00:10:30.845 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.845 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.845 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.845 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.845 fio-3.35 00:10:30.845 Starting 4 threads 00:10:32.220 00:10:32.220 job0: (groupid=0, jobs=1): err= 0: pid=75355: Sun Dec 15 05:50:53 2024 00:10:32.220 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:10:32.220 slat (usec): min=4, max=10094, avg=174.95, stdev=890.43 00:10:32.220 clat (usec): min=554, max=27812, avg=21203.95, stdev=2813.59 00:10:32.220 lat (usec): min=5558, max=27852, avg=21378.90, stdev=2706.17 00:10:32.220 clat percentiles (usec): 00:10:32.220 | 1.00th=[ 6194], 5.00th=[16909], 10.00th=[18482], 20.00th=[19792], 00:10:32.220 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21890], 60.00th=[21890], 00:10:32.220 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23200], 95.00th=[24511], 00:10:32.220 | 99.00th=[27657], 99.50th=[27657], 99.90th=[27657], 99.95th=[27657], 00:10:32.220 | 99.99th=[27919] 00:10:32.220 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:10:32.220 slat (usec): min=14, max=5054, avg=155.16, stdev=728.73 00:10:32.220 clat (usec): min=11924, max=28344, avg=21506.37, stdev=2361.41 00:10:32.220 lat (usec): min=14555, max=28370, avg=21661.53, stdev=2232.09 00:10:32.220 clat percentiles (usec): 00:10:32.220 | 1.00th=[16057], 5.00th=[17433], 10.00th=[18744], 20.00th=[20055], 00:10:32.220 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:10:32.220 | 70.00th=[22414], 80.00th=[22676], 90.00th=[24511], 95.00th=[26346], 00:10:32.220 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28443], 99.95th=[28443], 00:10:32.220 | 99.99th=[28443] 00:10:32.220 bw ( KiB/s): min=12288, max=12288, per=18.81%, avg=12288.00, stdev= 0.00, samples=2 00:10:32.220 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:32.220 lat (usec) : 750=0.02% 00:10:32.220 lat (msec) : 10=0.54%, 20=20.03%, 50=79.41% 00:10:32.220 cpu : usr=3.19%, sys=9.38%, ctx=187, majf=0, minf=13 00:10:32.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:32.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.220 issued rwts: total=2849,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.220 job1: (groupid=0, jobs=1): err= 0: pid=75356: Sun Dec 15 05:50:53 2024 00:10:32.220 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:10:32.220 slat (usec): min=7, max=6136, avg=163.78, stdev=819.43 00:10:32.220 clat (usec): min=266, max=25179, avg=21343.15, stdev=2853.34 00:10:32.220 lat (usec): min=2871, max=25207, avg=21506.93, stdev=2736.53 00:10:32.220 clat percentiles (usec): 00:10:32.220 | 1.00th=[ 3425], 5.00th=[17171], 10.00th=[20055], 20.00th=[21103], 00:10:32.220 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21890], 60.00th=[21890], 00:10:32.220 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23462], 95.00th=[23987], 00:10:32.220 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:10:32.220 | 99.99th=[25297] 00:10:32.220 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:10:32.220 slat (usec): min=12, max=7342, avg=165.54, stdev=782.20 00:10:32.220 clat (usec): 
min=13485, max=27071, avg=21213.12, stdev=1793.71 00:10:32.220 lat (usec): min=17494, max=27095, avg=21378.65, stdev=1633.66 00:10:32.220 clat percentiles (usec): 00:10:32.220 | 1.00th=[16581], 5.00th=[18220], 10.00th=[19006], 20.00th=[20317], 00:10:32.220 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:10:32.220 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22676], 95.00th=[24511], 00:10:32.220 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:10:32.220 | 99.99th=[27132] 00:10:32.220 bw ( KiB/s): min=12288, max=12288, per=18.81%, avg=12288.00, stdev= 0.00, samples=2 00:10:32.220 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:32.220 lat (usec) : 500=0.02% 00:10:32.220 lat (msec) : 4=0.54%, 10=0.54%, 20=12.75%, 50=86.15% 00:10:32.220 cpu : usr=2.89%, sys=9.78%, ctx=186, majf=0, minf=7 00:10:32.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:32.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.220 issued rwts: total=2849,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.220 job2: (groupid=0, jobs=1): err= 0: pid=75357: Sun Dec 15 05:50:53 2024 00:10:32.220 read: IOPS=4627, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1003msec) 00:10:32.220 slat (usec): min=3, max=5608, avg=96.44, stdev=455.15 00:10:32.220 clat (usec): min=281, max=16403, avg=12771.76, stdev=1187.76 00:10:32.220 lat (usec): min=2975, max=16418, avg=12868.20, stdev=1099.29 00:10:32.220 clat percentiles (usec): 00:10:32.220 | 1.00th=[ 9634], 5.00th=[11863], 10.00th=[11994], 20.00th=[12256], 00:10:32.220 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:10:32.220 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:10:32.220 | 99.00th=[15926], 99.50th=[16319], 99.90th=[16319], 99.95th=[16450], 00:10:32.220 | 99.99th=[16450] 00:10:32.220 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:32.220 slat (usec): min=10, max=6453, avg=100.80, stdev=441.27 00:10:32.220 clat (usec): min=5886, max=18934, avg=13159.69, stdev=1287.12 00:10:32.220 lat (usec): min=5907, max=18964, avg=13260.50, stdev=1218.07 00:10:32.220 clat percentiles (usec): 00:10:32.220 | 1.00th=[ 9372], 5.00th=[11600], 10.00th=[12256], 20.00th=[12649], 00:10:32.220 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:10:32.220 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14091], 95.00th=[15270], 00:10:32.220 | 99.00th=[17433], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:32.220 | 99.99th=[19006] 00:10:32.220 bw ( KiB/s): min=19720, max=20480, per=30.76%, avg=20100.00, stdev=537.40, samples=2 00:10:32.220 iops : min= 4930, max= 5120, avg=5025.00, stdev=134.35, samples=2 00:10:32.220 lat (usec) : 500=0.01% 00:10:32.220 lat (msec) : 4=0.33%, 10=1.19%, 20=98.47% 00:10:32.220 cpu : usr=4.19%, sys=13.87%, ctx=312, majf=0, minf=8 00:10:32.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:32.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.220 issued rwts: total=4641,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.220 job3: (groupid=0, jobs=1): err= 0: pid=75358: Sun 
Dec 15 05:50:53 2024 00:10:32.220 read: IOPS=4721, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1003msec) 00:10:32.220 slat (usec): min=4, max=3116, avg=95.38, stdev=446.07 00:10:32.220 clat (usec): min=2615, max=14755, avg=12695.43, stdev=1302.69 00:10:32.220 lat (usec): min=2626, max=14791, avg=12790.81, stdev=1230.74 00:10:32.220 clat percentiles (usec): 00:10:32.220 | 1.00th=[ 5800], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:10:32.220 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:10:32.220 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960], 00:10:32.220 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14615], 99.95th=[14746], 00:10:32.220 | 99.99th=[14746] 00:10:32.220 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:32.220 slat (usec): min=12, max=3217, avg=99.08, stdev=417.60 00:10:32.220 clat (usec): min=9485, max=14354, avg=12999.66, stdev=669.21 00:10:32.220 lat (usec): min=10411, max=14703, avg=13098.74, stdev=526.78 00:10:32.220 clat percentiles (usec): 00:10:32.220 | 1.00th=[10421], 5.00th=[11994], 10.00th=[12387], 20.00th=[12518], 00:10:32.220 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:10:32.220 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960], 00:10:32.220 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14353], 99.95th=[14353], 00:10:32.220 | 99.99th=[14353] 00:10:32.220 bw ( KiB/s): min=20480, max=20521, per=31.37%, avg=20500.50, stdev=28.99, samples=2 00:10:32.220 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:32.220 lat (msec) : 4=0.32%, 10=1.13%, 20=98.55% 00:10:32.220 cpu : usr=4.99%, sys=14.27%, ctx=308, majf=0, minf=5 00:10:32.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:32.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.220 issued rwts: total=4736,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.220 00:10:32.220 Run status group 0 (all jobs): 00:10:32.220 READ: bw=58.7MiB/s (61.6MB/s), 11.1MiB/s-18.4MiB/s (11.6MB/s-19.3MB/s), io=58.9MiB (61.7MB), run=1003-1003msec 00:10:32.220 WRITE: bw=63.8MiB/s (66.9MB/s), 12.0MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1003-1003msec 00:10:32.220 00:10:32.220 Disk stats (read/write): 00:10:32.220 nvme0n1: ios=2610/2592, merge=0/0, ticks=13549/11624, in_queue=25173, util=89.08% 00:10:32.220 nvme0n2: ios=2609/2560, merge=0/0, ticks=13029/12149, in_queue=25178, util=89.51% 00:10:32.220 nvme0n3: ios=4096/4320, merge=0/0, ticks=11703/12360, in_queue=24063, util=88.93% 00:10:32.220 nvme0n4: ios=4096/4384, merge=0/0, ticks=11696/12161, in_queue=23857, util=89.70% 00:10:32.220 05:50:53 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:32.220 [global] 00:10:32.220 thread=1 00:10:32.220 invalidate=1 00:10:32.220 rw=randwrite 00:10:32.220 time_based=1 00:10:32.220 runtime=1 00:10:32.220 ioengine=libaio 00:10:32.220 direct=1 00:10:32.220 bs=4096 00:10:32.220 iodepth=128 00:10:32.220 norandommap=0 00:10:32.220 numjobs=1 00:10:32.220 00:10:32.220 verify_dump=1 00:10:32.220 verify_backlog=512 00:10:32.220 verify_state_save=0 00:10:32.220 do_verify=1 00:10:32.220 verify=crc32c-intel 00:10:32.220 [job0] 00:10:32.220 filename=/dev/nvme0n1 00:10:32.220 [job1] 00:10:32.220 
filename=/dev/nvme0n2 00:10:32.220 [job2] 00:10:32.220 filename=/dev/nvme0n3 00:10:32.220 [job3] 00:10:32.220 filename=/dev/nvme0n4 00:10:32.220 Could not set queue depth (nvme0n1) 00:10:32.221 Could not set queue depth (nvme0n2) 00:10:32.221 Could not set queue depth (nvme0n3) 00:10:32.221 Could not set queue depth (nvme0n4) 00:10:32.221 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.221 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.221 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.221 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.221 fio-3.35 00:10:32.221 Starting 4 threads 00:10:33.595 00:10:33.595 job0: (groupid=0, jobs=1): err= 0: pid=75418: Sun Dec 15 05:50:54 2024 00:10:33.595 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:33.595 slat (usec): min=7, max=2568, avg=80.74, stdev=370.42 00:10:33.595 clat (usec): min=8039, max=12852, avg=10906.44, stdev=515.18 00:10:33.595 lat (usec): min=9736, max=12956, avg=10987.18, stdev=368.48 00:10:33.595 clat percentiles (usec): 00:10:33.595 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:10:33.595 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:10:33.595 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11469], 95.00th=[11600], 00:10:33.595 | 99.00th=[11863], 99.50th=[11863], 99.90th=[11994], 99.95th=[11994], 00:10:33.595 | 99.99th=[12911] 00:10:33.595 write: IOPS=5905, BW=23.1MiB/s (24.2MB/s)(23.1MiB/1002msec); 0 zone resets 00:10:33.595 slat (usec): min=10, max=2503, avg=84.42, stdev=347.67 00:10:33.595 clat (usec): min=1585, max=13410, avg=11023.16, stdev=994.31 00:10:33.595 lat (usec): min=1604, max=13800, avg=11107.58, stdev=943.16 00:10:33.595 clat percentiles (usec): 00:10:33.595 | 1.00th=[ 6915], 5.00th=[10028], 10.00th=[10552], 20.00th=[10814], 00:10:33.595 | 30.00th=[10945], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:33.595 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:10:33.595 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12780], 99.95th=[13173], 00:10:33.595 | 99.99th=[13435] 00:10:33.595 bw ( KiB/s): min=24328, max=24328, per=36.33%, avg=24328.00, stdev= 0.00, samples=1 00:10:33.595 iops : min= 6082, max= 6082, avg=6082.00, stdev= 0.00, samples=1 00:10:33.595 lat (msec) : 2=0.16%, 4=0.07%, 10=3.93%, 20=95.84% 00:10:33.595 cpu : usr=5.29%, sys=15.48%, ctx=372, majf=0, minf=1 00:10:33.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:33.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.595 issued rwts: total=5632,5917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.595 job1: (groupid=0, jobs=1): err= 0: pid=75419: Sun Dec 15 05:50:54 2024 00:10:33.595 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:10:33.595 slat (usec): min=5, max=9063, avg=181.95, stdev=938.59 00:10:33.595 clat (usec): min=14263, max=29314, avg=22749.76, stdev=1844.07 00:10:33.595 lat (usec): min=18031, max=29330, avg=22931.71, stdev=1651.35 00:10:33.595 clat percentiles (usec): 00:10:33.595 | 1.00th=[17695], 5.00th=[19006], 10.00th=[20579], 
20.00th=[21103], 00:10:33.595 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:10:33.595 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25822], 00:10:33.595 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:10:33.595 | 99.99th=[29230] 00:10:33.595 write: IOPS=2972, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1002msec); 0 zone resets 00:10:33.595 slat (usec): min=17, max=5648, avg=170.83, stdev=819.70 00:10:33.595 clat (usec): min=1015, max=30105, avg=22824.49, stdev=3563.59 00:10:33.595 lat (usec): min=1065, max=30130, avg=22995.32, stdev=3463.86 00:10:33.595 clat percentiles (usec): 00:10:33.595 | 1.00th=[ 6849], 5.00th=[16909], 10.00th=[17695], 20.00th=[20579], 00:10:33.595 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:10:33.595 | 70.00th=[24249], 80.00th=[24773], 90.00th=[26608], 95.00th=[27132], 00:10:33.595 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:10:33.595 | 99.99th=[30016] 00:10:33.595 bw ( KiB/s): min=12288, max=12288, per=18.35%, avg=12288.00, stdev= 0.00, samples=1 00:10:33.595 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:33.595 lat (msec) : 2=0.04%, 10=0.58%, 20=10.64%, 50=88.75% 00:10:33.595 cpu : usr=2.80%, sys=8.99%, ctx=176, majf=0, minf=10 00:10:33.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:33.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.595 issued rwts: total=2560,2978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.595 job2: (groupid=0, jobs=1): err= 0: pid=75420: Sun Dec 15 05:50:54 2024 00:10:33.595 read: IOPS=4695, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec) 00:10:33.595 slat (usec): min=7, max=10079, avg=97.25, stdev=473.62 00:10:33.595 clat (usec): min=258, max=20309, avg=12843.88, stdev=1757.59 00:10:33.595 lat (usec): min=2641, max=20324, avg=12941.14, stdev=1701.34 00:10:33.595 clat percentiles (usec): 00:10:33.595 | 1.00th=[ 6194], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:10:33.595 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[12911], 00:10:33.595 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13435], 95.00th=[15270], 00:10:33.595 | 99.00th=[20317], 99.50th=[20317], 99.90th=[20317], 99.95th=[20317], 00:10:33.595 | 99.99th=[20317] 00:10:33.595 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:33.595 slat (usec): min=10, max=5296, avg=98.38, stdev=424.30 00:10:33.595 clat (usec): min=9027, max=16283, avg=12894.35, stdev=835.62 00:10:33.595 lat (usec): min=10604, max=16335, avg=12992.74, stdev=725.75 00:10:33.595 clat percentiles (usec): 00:10:33.595 | 1.00th=[10290], 5.00th=[11731], 10.00th=[12125], 20.00th=[12387], 00:10:33.595 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:33.595 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:10:33.595 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:10:33.595 | 99.99th=[16319] 00:10:33.595 bw ( KiB/s): min=20480, max=20480, per=30.58%, avg=20480.00, stdev= 0.00, samples=1 00:10:33.595 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:33.595 lat (usec) : 500=0.01% 00:10:33.595 lat (msec) : 4=0.33%, 10=1.15%, 20=97.36%, 50=1.15% 00:10:33.595 cpu : usr=4.30%, sys=13.89%, ctx=310, majf=0, minf=1 00:10:33.595 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:33.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.595 issued rwts: total=4705,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.595 job3: (groupid=0, jobs=1): err= 0: pid=75421: Sun Dec 15 05:50:54 2024 00:10:33.595 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:33.595 slat (usec): min=10, max=8141, avg=176.97, stdev=901.84 00:10:33.595 clat (usec): min=17559, max=32975, avg=24491.57, stdev=2440.52 00:10:33.595 lat (usec): min=19892, max=33001, avg=24668.54, stdev=2265.58 00:10:33.595 clat percentiles (usec): 00:10:33.595 | 1.00th=[17957], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:10:33.595 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23987], 00:10:33.595 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27132], 95.00th=[28443], 00:10:33.595 | 99.00th=[32637], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:10:33.595 | 99.99th=[32900] 00:10:33.595 write: IOPS=2757, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:10:33.595 slat (usec): min=11, max=8668, avg=188.54, stdev=909.48 00:10:33.595 clat (usec): min=868, max=27952, avg=22807.67, stdev=3220.94 00:10:33.595 lat (usec): min=894, max=27994, avg=22996.22, stdev=3118.41 00:10:33.595 clat percentiles (usec): 00:10:33.595 | 1.00th=[ 6521], 5.00th=[17433], 10.00th=[20055], 20.00th=[20841], 00:10:33.595 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23725], 00:10:33.595 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25035], 95.00th=[26870], 00:10:33.595 | 99.00th=[27919], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:10:33.595 | 99.99th=[27919] 00:10:33.595 bw ( KiB/s): min=12288, max=12288, per=18.35%, avg=12288.00, stdev= 0.00, samples=1 00:10:33.595 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:33.595 lat (usec) : 1000=0.09% 00:10:33.595 lat (msec) : 2=0.06%, 10=0.60%, 20=5.64%, 50=93.61% 00:10:33.595 cpu : usr=3.30%, sys=8.60%, ctx=167, majf=0, minf=11 00:10:33.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:33.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.595 issued rwts: total=2560,2760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.595 00:10:33.595 Run status group 0 (all jobs): 00:10:33.595 READ: bw=60.3MiB/s (63.2MB/s), 9.98MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=60.4MiB (63.3MB), run=1001-1002msec 00:10:33.595 WRITE: bw=65.4MiB/s (68.6MB/s), 10.8MiB/s-23.1MiB/s (11.3MB/s-24.2MB/s), io=65.5MiB (68.7MB), run=1001-1002msec 00:10:33.595 00:10:33.595 Disk stats (read/write): 00:10:33.595 nvme0n1: ios=4850/5120, merge=0/0, ticks=11203/12117, in_queue=23320, util=88.38% 00:10:33.595 nvme0n2: ios=2193/2560, merge=0/0, ticks=12004/13098, in_queue=25102, util=88.27% 00:10:33.595 nvme0n3: ios=4096/4352, merge=0/0, ticks=11695/12145, in_queue=23840, util=88.75% 00:10:33.595 nvme0n4: ios=2080/2560, merge=0/0, ticks=11506/13490, in_queue=24996, util=89.71% 00:10:33.595 05:50:54 -- target/fio.sh@55 -- # sync 00:10:33.595 05:50:54 -- target/fio.sh@59 -- # fio_pid=75435 00:10:33.595 05:50:54 -- target/fio.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:33.595 05:50:54 -- target/fio.sh@61 -- # sleep 3 00:10:33.595 [global] 00:10:33.595 thread=1 00:10:33.595 invalidate=1 00:10:33.595 rw=read 00:10:33.595 time_based=1 00:10:33.595 runtime=10 00:10:33.595 ioengine=libaio 00:10:33.595 direct=1 00:10:33.595 bs=4096 00:10:33.595 iodepth=1 00:10:33.595 norandommap=1 00:10:33.595 numjobs=1 00:10:33.595 00:10:33.595 [job0] 00:10:33.595 filename=/dev/nvme0n1 00:10:33.595 [job1] 00:10:33.595 filename=/dev/nvme0n2 00:10:33.595 [job2] 00:10:33.595 filename=/dev/nvme0n3 00:10:33.595 [job3] 00:10:33.595 filename=/dev/nvme0n4 00:10:33.595 Could not set queue depth (nvme0n1) 00:10:33.595 Could not set queue depth (nvme0n2) 00:10:33.596 Could not set queue depth (nvme0n3) 00:10:33.596 Could not set queue depth (nvme0n4) 00:10:33.596 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.596 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.596 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.596 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.596 fio-3.35 00:10:33.596 Starting 4 threads 00:10:36.877 05:50:57 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:36.877 fio: pid=75483, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.877 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=61992960, buflen=4096 00:10:36.877 05:50:58 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:36.877 fio: pid=75482, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.877 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=66416640, buflen=4096 00:10:37.135 05:50:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.135 05:50:58 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:37.135 fio: pid=75480, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.135 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11112448, buflen=4096 00:10:37.393 05:50:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.393 05:50:58 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:37.393 fio: pid=75481, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.393 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13615104, buflen=4096 00:10:37.651 05:50:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.651 05:50:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:37.651 00:10:37.651 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75480: Sun Dec 15 05:50:59 2024 00:10:37.651 read: IOPS=5476, BW=21.4MiB/s (22.4MB/s)(74.6MiB/3487msec) 00:10:37.651 slat (usec): min=7, max=10554, avg=17.19, stdev=135.95 00:10:37.651 clat (usec): min=116, max=3650, avg=163.95, stdev=35.21 00:10:37.651 lat 
(usec): min=128, max=10720, avg=181.14, stdev=140.40 00:10:37.651 clat percentiles (usec): 00:10:37.651 | 1.00th=[ 131], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 151], 00:10:37.651 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:37.651 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 192], 00:10:37.651 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 237], 99.95th=[ 461], 00:10:37.651 | 99.99th=[ 1663] 00:10:37.651 bw ( KiB/s): min=21480, max=22296, per=29.39%, avg=22065.33, stdev=330.99, samples=6 00:10:37.651 iops : min= 5370, max= 5574, avg=5516.33, stdev=82.75, samples=6 00:10:37.651 lat (usec) : 250=99.92%, 500=0.04%, 750=0.01%, 1000=0.01% 00:10:37.651 lat (msec) : 2=0.02%, 4=0.01% 00:10:37.651 cpu : usr=2.07%, sys=7.17%, ctx=19105, majf=0, minf=1 00:10:37.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.651 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.651 issued rwts: total=19098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.651 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75481: Sun Dec 15 05:50:59 2024 00:10:37.651 read: IOPS=5272, BW=20.6MiB/s (21.6MB/s)(77.0MiB/3738msec) 00:10:37.652 slat (usec): min=7, max=15958, avg=18.52, stdev=191.11 00:10:37.652 clat (usec): min=37, max=7442, avg=169.61, stdev=70.14 00:10:37.652 lat (usec): min=122, max=16168, avg=188.13, stdev=204.01 00:10:37.652 clat percentiles (usec): 00:10:37.652 | 1.00th=[ 123], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:10:37.652 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:37.652 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 198], 95.00th=[ 239], 00:10:37.652 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 412], 99.95th=[ 881], 00:10:37.652 | 99.99th=[ 2606] 00:10:37.652 bw ( KiB/s): min=15736, max=22408, per=27.95%, avg=20984.86, stdev=2500.86, samples=7 00:10:37.652 iops : min= 3934, max= 5602, avg=5246.14, stdev=625.26, samples=7 00:10:37.652 lat (usec) : 50=0.01%, 100=0.01%, 250=95.74%, 500=4.17%, 750=0.03% 00:10:37.652 lat (usec) : 1000=0.02% 00:10:37.652 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:10:37.652 cpu : usr=1.53%, sys=7.12%, ctx=19732, majf=0, minf=2 00:10:37.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.652 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.652 issued rwts: total=19709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.652 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75482: Sun Dec 15 05:50:59 2024 00:10:37.652 read: IOPS=5058, BW=19.8MiB/s (20.7MB/s)(63.3MiB/3206msec) 00:10:37.652 slat (usec): min=7, max=8479, avg=15.09, stdev=90.32 00:10:37.652 clat (usec): min=129, max=1836, avg=181.22, stdev=42.77 00:10:37.652 lat (usec): min=141, max=8688, avg=196.31, stdev=100.04 00:10:37.652 clat percentiles (usec): 00:10:37.652 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:37.652 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:37.652 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 208], 95.00th=[ 255], 00:10:37.652 | 99.00th=[ 
338], 99.50th=[ 355], 99.90th=[ 400], 99.95th=[ 652], 00:10:37.652 | 99.99th=[ 1614] 00:10:37.652 bw ( KiB/s): min=17032, max=21416, per=27.49%, avg=20637.33, stdev=1766.86, samples=6 00:10:37.652 iops : min= 4258, max= 5354, avg=5159.33, stdev=441.71, samples=6 00:10:37.652 lat (usec) : 250=94.72%, 500=5.20%, 750=0.03%, 1000=0.02% 00:10:37.652 lat (msec) : 2=0.02% 00:10:37.652 cpu : usr=1.34%, sys=6.65%, ctx=16223, majf=0, minf=2 00:10:37.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.652 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.652 issued rwts: total=16216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.652 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75483: Sun Dec 15 05:50:59 2024 00:10:37.652 read: IOPS=5125, BW=20.0MiB/s (21.0MB/s)(59.1MiB/2953msec) 00:10:37.652 slat (nsec): min=7615, max=75662, avg=15173.45, stdev=4436.16 00:10:37.652 clat (usec): min=133, max=1777, avg=178.40, stdev=42.24 00:10:37.652 lat (usec): min=148, max=1792, avg=193.57, stdev=42.26 00:10:37.652 clat percentiles (usec): 00:10:37.652 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:10:37.652 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:10:37.652 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 223], 00:10:37.652 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 412], 99.95th=[ 619], 00:10:37.652 | 99.99th=[ 1631] 00:10:37.652 bw ( KiB/s): min=21016, max=21424, per=28.37%, avg=21299.20, stdev=164.98, samples=5 00:10:37.652 iops : min= 5254, max= 5356, avg=5324.80, stdev=41.25, samples=5 00:10:37.652 lat (usec) : 250=95.55%, 500=4.37%, 750=0.03%, 1000=0.01% 00:10:37.652 lat (msec) : 2=0.03% 00:10:37.652 cpu : usr=1.66%, sys=7.05%, ctx=15138, majf=0, minf=2 00:10:37.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.652 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.652 issued rwts: total=15136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.652 00:10:37.652 Run status group 0 (all jobs): 00:10:37.652 READ: bw=73.3MiB/s (76.9MB/s), 19.8MiB/s-21.4MiB/s (20.7MB/s-22.4MB/s), io=274MiB (287MB), run=2953-3738msec 00:10:37.652 00:10:37.652 Disk stats (read/write): 00:10:37.652 nvme0n1: ios=18505/0, merge=0/0, ticks=3083/0, in_queue=3083, util=95.39% 00:10:37.652 nvme0n2: ios=18920/0, merge=0/0, ticks=3285/0, in_queue=3285, util=95.13% 00:10:37.652 nvme0n3: ios=15861/0, merge=0/0, ticks=2869/0, in_queue=2869, util=96.37% 00:10:37.652 nvme0n4: ios=14880/0, merge=0/0, ticks=2650/0, in_queue=2650, util=96.79% 00:10:37.910 05:50:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.910 05:50:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:38.167 05:50:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.167 05:50:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:38.424 05:50:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.425 05:50:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:38.682 05:51:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.682 05:51:00 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:38.940 05:51:00 -- target/fio.sh@69 -- # fio_status=0 00:10:38.940 05:51:00 -- target/fio.sh@70 -- # wait 75435 00:10:38.940 05:51:00 -- target/fio.sh@70 -- # fio_status=4 00:10:38.940 05:51:00 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.940 05:51:00 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:38.940 05:51:00 -- common/autotest_common.sh@1208 -- # local i=0 00:10:38.940 05:51:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:38.940 05:51:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.940 05:51:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:38.940 05:51:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.940 nvmf hotplug test: fio failed as expected 00:10:38.940 05:51:00 -- common/autotest_common.sh@1220 -- # return 0 00:10:38.940 05:51:00 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:38.940 05:51:00 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:38.940 05:51:00 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.198 05:51:00 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:39.198 05:51:00 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:39.198 05:51:00 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:39.198 05:51:00 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:39.198 05:51:00 -- target/fio.sh@91 -- # nvmftestfini 00:10:39.198 05:51:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:39.198 05:51:00 -- nvmf/common.sh@116 -- # sync 00:10:39.198 05:51:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:39.198 05:51:00 -- nvmf/common.sh@119 -- # set +e 00:10:39.198 05:51:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:39.198 05:51:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:39.198 rmmod nvme_tcp 00:10:39.198 rmmod nvme_fabrics 00:10:39.198 rmmod nvme_keyring 00:10:39.198 05:51:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:39.198 05:51:00 -- nvmf/common.sh@123 -- # set -e 00:10:39.198 05:51:00 -- nvmf/common.sh@124 -- # return 0 00:10:39.198 05:51:00 -- nvmf/common.sh@477 -- # '[' -n 75054 ']' 00:10:39.198 05:51:00 -- nvmf/common.sh@478 -- # killprocess 75054 00:10:39.198 05:51:00 -- common/autotest_common.sh@936 -- # '[' -z 75054 ']' 00:10:39.198 05:51:00 -- common/autotest_common.sh@940 -- # kill -0 75054 00:10:39.198 05:51:00 -- common/autotest_common.sh@941 -- # uname 00:10:39.198 05:51:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:39.198 05:51:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75054 00:10:39.198 killing process with pid 75054 00:10:39.198 05:51:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:39.198 05:51:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:39.198 05:51:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75054' 
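
The hotplug phase above boils down to a simple pattern: start a long-running fio read job against the exported namespaces, delete the backing bdevs out from under it over RPC, and treat the resulting I/O errors as the expected (passing) outcome. A minimal sketch of that flow, using only the paths, flags and bdev names visible in this trace (the target, subsystem and namespaces are assumed to be set up already):

    # Hotplug sketch -- condensed from the trace above, not the literal fio.sh code.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # Pull the backing bdevs away while fio is still issuing reads; each
    # namespace then fails with "Operation not supported" in the fio output.
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$m"
    done

    # A non-zero fio exit status is the passing result for this phase.
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev removal"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi

The disconnect check that follows (lsblk -o NAME,SERIAL filtered for the subsystem serial) then confirms the kernel initiator really dropped the namespaces after nvme disconnect.
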
00:10:39.198 05:51:00 -- common/autotest_common.sh@955 -- # kill 75054 00:10:39.198 05:51:00 -- common/autotest_common.sh@960 -- # wait 75054 00:10:39.458 05:51:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:39.458 05:51:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:39.458 05:51:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:39.458 05:51:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.458 05:51:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:39.458 05:51:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.458 05:51:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.458 05:51:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.458 05:51:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:39.458 00:10:39.458 real 0m18.842s 00:10:39.458 user 1m11.197s 00:10:39.458 sys 0m10.500s 00:10:39.458 05:51:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:39.458 05:51:00 -- common/autotest_common.sh@10 -- # set +x 00:10:39.458 ************************************ 00:10:39.458 END TEST nvmf_fio_target 00:10:39.458 ************************************ 00:10:39.458 05:51:00 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:39.458 05:51:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:39.458 05:51:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.458 05:51:00 -- common/autotest_common.sh@10 -- # set +x 00:10:39.458 ************************************ 00:10:39.458 START TEST nvmf_bdevio 00:10:39.458 ************************************ 00:10:39.458 05:51:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:39.458 * Looking for test storage... 00:10:39.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:39.458 05:51:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:39.458 05:51:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:39.458 05:51:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:39.718 05:51:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:39.718 05:51:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:39.718 05:51:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:39.718 05:51:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:39.718 05:51:01 -- scripts/common.sh@335 -- # IFS=.-: 00:10:39.718 05:51:01 -- scripts/common.sh@335 -- # read -ra ver1 00:10:39.718 05:51:01 -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.718 05:51:01 -- scripts/common.sh@336 -- # read -ra ver2 00:10:39.718 05:51:01 -- scripts/common.sh@337 -- # local 'op=<' 00:10:39.718 05:51:01 -- scripts/common.sh@339 -- # ver1_l=2 00:10:39.718 05:51:01 -- scripts/common.sh@340 -- # ver2_l=1 00:10:39.718 05:51:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:39.718 05:51:01 -- scripts/common.sh@343 -- # case "$op" in 00:10:39.718 05:51:01 -- scripts/common.sh@344 -- # : 1 00:10:39.718 05:51:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:39.718 05:51:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.718 05:51:01 -- scripts/common.sh@364 -- # decimal 1 00:10:39.718 05:51:01 -- scripts/common.sh@352 -- # local d=1 00:10:39.718 05:51:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.718 05:51:01 -- scripts/common.sh@354 -- # echo 1 00:10:39.718 05:51:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:39.718 05:51:01 -- scripts/common.sh@365 -- # decimal 2 00:10:39.718 05:51:01 -- scripts/common.sh@352 -- # local d=2 00:10:39.718 05:51:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.718 05:51:01 -- scripts/common.sh@354 -- # echo 2 00:10:39.718 05:51:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:39.718 05:51:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:39.718 05:51:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:39.718 05:51:01 -- scripts/common.sh@367 -- # return 0 00:10:39.718 05:51:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.718 05:51:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:39.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.718 --rc genhtml_branch_coverage=1 00:10:39.718 --rc genhtml_function_coverage=1 00:10:39.718 --rc genhtml_legend=1 00:10:39.718 --rc geninfo_all_blocks=1 00:10:39.718 --rc geninfo_unexecuted_blocks=1 00:10:39.718 00:10:39.718 ' 00:10:39.718 05:51:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:39.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.718 --rc genhtml_branch_coverage=1 00:10:39.718 --rc genhtml_function_coverage=1 00:10:39.718 --rc genhtml_legend=1 00:10:39.718 --rc geninfo_all_blocks=1 00:10:39.718 --rc geninfo_unexecuted_blocks=1 00:10:39.718 00:10:39.718 ' 00:10:39.718 05:51:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:39.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.718 --rc genhtml_branch_coverage=1 00:10:39.718 --rc genhtml_function_coverage=1 00:10:39.718 --rc genhtml_legend=1 00:10:39.718 --rc geninfo_all_blocks=1 00:10:39.718 --rc geninfo_unexecuted_blocks=1 00:10:39.718 00:10:39.718 ' 00:10:39.718 05:51:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:39.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.718 --rc genhtml_branch_coverage=1 00:10:39.718 --rc genhtml_function_coverage=1 00:10:39.718 --rc genhtml_legend=1 00:10:39.718 --rc geninfo_all_blocks=1 00:10:39.718 --rc geninfo_unexecuted_blocks=1 00:10:39.718 00:10:39.718 ' 00:10:39.718 05:51:01 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:39.718 05:51:01 -- nvmf/common.sh@7 -- # uname -s 00:10:39.718 05:51:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.718 05:51:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.718 05:51:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.718 05:51:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.718 05:51:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.718 05:51:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.718 05:51:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.718 05:51:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.718 05:51:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.718 05:51:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.718 05:51:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:39.718 
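
The scripts/common.sh xtrace earlier in this block (lt / cmp_versions) is the small version check that decides which lcov flags to use: both version strings are split on '.', '-' and ':' and compared field by field, and since 1.15 sorts before 2 the branch/function-coverage options are kept. A condensed reconstruction of that logic, as a sketch of what the trace shows rather than a copy of scripts/common.sh:

    # Sketch: true if version $1 sorts before version $2, per the trace above.
    version_lt() {
        local -a a b
        local i x y
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')
    if version_lt "$lcov_ver" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
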
05:51:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:39.718 05:51:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.718 05:51:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.718 05:51:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:39.718 05:51:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:39.718 05:51:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.718 05:51:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.718 05:51:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.719 05:51:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.719 05:51:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.719 05:51:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.719 05:51:01 -- paths/export.sh@5 -- # export PATH 00:10:39.719 05:51:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.719 05:51:01 -- nvmf/common.sh@46 -- # : 0 00:10:39.719 05:51:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:39.719 05:51:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:39.719 05:51:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:39.719 05:51:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.719 05:51:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.719 05:51:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
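
The NVME_HOSTNQN/NVME_HOSTID pair generated above (together with NVME_CONNECT and the NVMF_SERIAL set a few lines earlier) is how the suite identifies the kernel initiator when a test attaches it to the target. No kernel-side connect happens in the bdevio test itself, which drives the SPDK initiator directly, so the following is only an illustration of how those variables are consumed, using standard nvme-cli flags and the subsystem, address and serial values seen elsewhere in this log:

    # Illustrative only -- attach the kernel initiator with the generated host identity.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

    # The namespaces then appear with the serial configured on the subsystem,
    # which is what the waitforserial_disconnect check in the fio test above grepped for.
    lsblk -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
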
00:10:39.719 05:51:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:39.719 05:51:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:39.719 05:51:01 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.719 05:51:01 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.719 05:51:01 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:39.719 05:51:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:39.719 05:51:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.719 05:51:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:39.719 05:51:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:39.719 05:51:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:39.719 05:51:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.719 05:51:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.719 05:51:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.719 05:51:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:39.719 05:51:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:39.719 05:51:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:39.719 05:51:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:39.719 05:51:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:39.719 05:51:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:39.719 05:51:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.719 05:51:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.719 05:51:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:39.719 05:51:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:39.719 05:51:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:39.719 05:51:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:39.719 05:51:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:39.719 05:51:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.719 05:51:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:39.719 05:51:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:39.719 05:51:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:39.719 05:51:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:39.719 05:51:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:39.719 05:51:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:39.719 Cannot find device "nvmf_tgt_br" 00:10:39.719 05:51:01 -- nvmf/common.sh@154 -- # true 00:10:39.719 05:51:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:39.719 Cannot find device "nvmf_tgt_br2" 00:10:39.719 05:51:01 -- nvmf/common.sh@155 -- # true 00:10:39.719 05:51:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:39.719 05:51:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:39.719 Cannot find device "nvmf_tgt_br" 00:10:39.719 05:51:01 -- nvmf/common.sh@157 -- # true 00:10:39.719 05:51:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:39.719 Cannot find device "nvmf_tgt_br2" 00:10:39.719 05:51:01 -- nvmf/common.sh@158 -- # true 00:10:39.719 05:51:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:39.719 05:51:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:39.719 05:51:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:39.719 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:39.719 05:51:01 -- nvmf/common.sh@161 -- # true 00:10:39.719 05:51:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:39.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.719 05:51:01 -- nvmf/common.sh@162 -- # true 00:10:39.719 05:51:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:39.719 05:51:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:39.719 05:51:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:39.719 05:51:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:39.719 05:51:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:39.719 05:51:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:39.978 05:51:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:39.978 05:51:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:39.978 05:51:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:39.978 05:51:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:39.978 05:51:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:39.978 05:51:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:39.978 05:51:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:39.978 05:51:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:39.978 05:51:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:39.978 05:51:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:39.978 05:51:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:39.978 05:51:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:39.978 05:51:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:39.978 05:51:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:39.978 05:51:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:39.978 05:51:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:39.978 05:51:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:39.978 05:51:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:39.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:10:39.978 00:10:39.978 --- 10.0.0.2 ping statistics --- 00:10:39.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.978 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:39.978 05:51:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:39.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:39.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:10:39.978 00:10:39.978 --- 10.0.0.3 ping statistics --- 00:10:39.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.978 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:39.978 05:51:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:39.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:39.978 00:10:39.978 --- 10.0.0.1 ping statistics --- 00:10:39.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.978 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:39.978 05:51:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.978 05:51:01 -- nvmf/common.sh@421 -- # return 0 00:10:39.978 05:51:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:39.978 05:51:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.978 05:51:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:39.978 05:51:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:39.978 05:51:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.978 05:51:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:39.978 05:51:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:39.978 05:51:01 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:39.978 05:51:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:39.978 05:51:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:39.978 05:51:01 -- common/autotest_common.sh@10 -- # set +x 00:10:39.978 05:51:01 -- nvmf/common.sh@469 -- # nvmfpid=75752 00:10:39.978 05:51:01 -- nvmf/common.sh@470 -- # waitforlisten 75752 00:10:39.978 05:51:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:39.978 05:51:01 -- common/autotest_common.sh@829 -- # '[' -z 75752 ']' 00:10:39.978 05:51:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.978 05:51:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.978 05:51:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.978 05:51:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.978 05:51:01 -- common/autotest_common.sh@10 -- # set +x 00:10:39.978 [2024-12-15 05:51:01.552323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:39.978 [2024-12-15 05:51:01.552421] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.237 [2024-12-15 05:51:01.685453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.237 [2024-12-15 05:51:01.718966] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:40.237 [2024-12-15 05:51:01.719109] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.237 [2024-12-15 05:51:01.719122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.237 [2024-12-15 05:51:01.719130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
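
With the veth/netns plumbing verified by the pings above, the nvmf_tgt launch just performed and the RPC configuration that follows are easier to read stripped of the xtrace noise. Roughly, with every name and flag taken from the trace and rpc.py standing in for the rpc_cmd wrapper:

    # Start nvmf_tgt inside the target namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # (the harness does this with waitforlisten "$nvmfpid")

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevio then attaches over NVMe/TCP using the JSON emitted by
    # gen_nvmf_target_json and runs its blockdev test suite against Nvme1n1.
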
00:10:40.237 [2024-12-15 05:51:01.719325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:40.237 [2024-12-15 05:51:01.719544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:40.237 [2024-12-15 05:51:01.719641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:40.237 [2024-12-15 05:51:01.719702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.237 05:51:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.237 05:51:01 -- common/autotest_common.sh@862 -- # return 0 00:10:40.237 05:51:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:40.237 05:51:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:40.237 05:51:01 -- common/autotest_common.sh@10 -- # set +x 00:10:40.237 05:51:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.237 05:51:01 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.237 05:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.237 05:51:01 -- common/autotest_common.sh@10 -- # set +x 00:10:40.237 [2024-12-15 05:51:01.845717] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.496 05:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.496 05:51:01 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.496 05:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.496 05:51:01 -- common/autotest_common.sh@10 -- # set +x 00:10:40.496 Malloc0 00:10:40.496 05:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.496 05:51:01 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.496 05:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.496 05:51:01 -- common/autotest_common.sh@10 -- # set +x 00:10:40.496 05:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.496 05:51:01 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.496 05:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.496 05:51:01 -- common/autotest_common.sh@10 -- # set +x 00:10:40.496 05:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.496 05:51:01 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.496 05:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.496 05:51:01 -- common/autotest_common.sh@10 -- # set +x 00:10:40.496 [2024-12-15 05:51:01.918860] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.496 05:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.496 05:51:01 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:40.496 05:51:01 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:40.496 05:51:01 -- nvmf/common.sh@520 -- # config=() 00:10:40.496 05:51:01 -- nvmf/common.sh@520 -- # local subsystem config 00:10:40.496 05:51:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:40.496 05:51:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:40.496 { 00:10:40.496 "params": { 00:10:40.496 "name": "Nvme$subsystem", 00:10:40.496 "trtype": "$TEST_TRANSPORT", 00:10:40.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:40.496 "adrfam": "ipv4", 00:10:40.496 "trsvcid": "$NVMF_PORT", 00:10:40.496 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:40.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:40.496 "hdgst": ${hdgst:-false}, 00:10:40.496 "ddgst": ${ddgst:-false} 00:10:40.496 }, 00:10:40.496 "method": "bdev_nvme_attach_controller" 00:10:40.496 } 00:10:40.496 EOF 00:10:40.496 )") 00:10:40.496 05:51:01 -- nvmf/common.sh@542 -- # cat 00:10:40.496 05:51:01 -- nvmf/common.sh@544 -- # jq . 00:10:40.496 05:51:01 -- nvmf/common.sh@545 -- # IFS=, 00:10:40.496 05:51:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:40.496 "params": { 00:10:40.496 "name": "Nvme1", 00:10:40.496 "trtype": "tcp", 00:10:40.496 "traddr": "10.0.0.2", 00:10:40.496 "adrfam": "ipv4", 00:10:40.496 "trsvcid": "4420", 00:10:40.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:40.496 "hdgst": false, 00:10:40.496 "ddgst": false 00:10:40.496 }, 00:10:40.496 "method": "bdev_nvme_attach_controller" 00:10:40.496 }' 00:10:40.496 [2024-12-15 05:51:01.972395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:40.496 [2024-12-15 05:51:01.972622] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75775 ] 00:10:40.497 [2024-12-15 05:51:02.117720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.756 [2024-12-15 05:51:02.160365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.756 [2024-12-15 05:51:02.160512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.756 [2024-12-15 05:51:02.160516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.756 [2024-12-15 05:51:02.295837] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:40.756 [2024-12-15 05:51:02.296129] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:40.756 I/O targets: 00:10:40.756 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:40.756 00:10:40.756 00:10:40.756 CUnit - A unit testing framework for C - Version 2.1-3 00:10:40.756 http://cunit.sourceforge.net/ 00:10:40.756 00:10:40.756 00:10:40.756 Suite: bdevio tests on: Nvme1n1 00:10:40.756 Test: blockdev write read block ...passed 00:10:40.756 Test: blockdev write zeroes read block ...passed 00:10:40.756 Test: blockdev write zeroes read no split ...passed 00:10:40.756 Test: blockdev write zeroes read split ...passed 00:10:40.756 Test: blockdev write zeroes read split partial ...passed 00:10:40.756 Test: blockdev reset ...[2024-12-15 05:51:02.328264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:40.756 [2024-12-15 05:51:02.328582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18af2a0 (9): Bad file descriptor 00:10:40.756 [2024-12-15 05:51:02.345219] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:40.756 passed 00:10:40.756 Test: blockdev write read 8 blocks ...passed 00:10:40.756 Test: blockdev write read size > 128k ...passed 00:10:40.756 Test: blockdev write read invalid size ...passed 00:10:40.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:40.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:40.756 Test: blockdev write read max offset ...passed 00:10:40.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:40.756 Test: blockdev writev readv 8 blocks ...passed 00:10:40.756 Test: blockdev writev readv 30 x 1block ...passed 00:10:40.756 Test: blockdev writev readv block ...passed 00:10:40.756 Test: blockdev writev readv size > 128k ...passed 00:10:40.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:40.756 Test: blockdev comparev and writev ...[2024-12-15 05:51:02.355747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.756 [2024-12-15 05:51:02.356006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:40.756 [2024-12-15 05:51:02.356044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.756 [2024-12-15 05:51:02.356060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:40.756 [2024-12-15 05:51:02.356395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.756 [2024-12-15 05:51:02.356428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:40.756 [2024-12-15 05:51:02.356452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.756 [2024-12-15 05:51:02.356465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:40.756 [2024-12-15 05:51:02.356760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.756 [2024-12-15 05:51:02.356787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:40.756 [2024-12-15 05:51:02.356809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.756 [2024-12-15 05:51:02.356822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:40.756 [2024-12-15 05:51:02.357287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.756 [2024-12-15 05:51:02.357325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:40.756 [2024-12-15 05:51:02.357348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.756 [2024-12-15 05:51:02.357362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:40.756 passed 00:10:40.756 Test: blockdev nvme passthru rw ...passed 00:10:40.756 Test: blockdev nvme passthru vendor specific ...[2024-12-15 05:51:02.358823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.756 [2024-12-15 05:51:02.358864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:40.757 [2024-12-15 05:51:02.359017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.757 [2024-12-15 05:51:02.359038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:40.757 [2024-12-15 05:51:02.359160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.757 [2024-12-15 05:51:02.359195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:40.757 passed 00:10:40.757 Test: blockdev nvme admin passthru ...[2024-12-15 05:51:02.359515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.757 [2024-12-15 05:51:02.359552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:40.757 passed 00:10:40.757 Test: blockdev copy ...passed 00:10:40.757 00:10:40.757 Run Summary: Type Total Ran Passed Failed Inactive 00:10:40.757 suites 1 1 n/a 0 0 00:10:40.757 tests 23 23 23 0 0 00:10:40.757 asserts 152 152 152 0 n/a 00:10:40.757 00:10:40.757 Elapsed time = 0.166 seconds 00:10:41.016 05:51:02 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.016 05:51:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.016 05:51:02 -- common/autotest_common.sh@10 -- # set +x 00:10:41.016 05:51:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.016 05:51:02 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:41.016 05:51:02 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:41.016 05:51:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:41.016 05:51:02 -- nvmf/common.sh@116 -- # sync 00:10:41.016 05:51:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:41.016 05:51:02 -- nvmf/common.sh@119 -- # set +e 00:10:41.016 05:51:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:41.016 05:51:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:41.016 rmmod nvme_tcp 00:10:41.016 rmmod nvme_fabrics 00:10:41.016 rmmod nvme_keyring 00:10:41.016 05:51:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:41.016 05:51:02 -- nvmf/common.sh@123 -- # set -e 00:10:41.016 05:51:02 -- nvmf/common.sh@124 -- # return 0 00:10:41.016 05:51:02 -- nvmf/common.sh@477 -- # '[' -n 75752 ']' 00:10:41.016 05:51:02 -- nvmf/common.sh@478 -- # killprocess 75752 00:10:41.016 05:51:02 -- common/autotest_common.sh@936 -- # '[' -z 75752 ']' 00:10:41.016 05:51:02 -- common/autotest_common.sh@940 -- # kill -0 75752 00:10:41.016 05:51:02 -- common/autotest_common.sh@941 -- # uname 00:10:41.016 05:51:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:41.016 05:51:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75752 00:10:41.016 killing process with pid 75752 00:10:41.016 
05:51:02 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:41.016 05:51:02 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:41.016 05:51:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75752' 00:10:41.016 05:51:02 -- common/autotest_common.sh@955 -- # kill 75752 00:10:41.016 05:51:02 -- common/autotest_common.sh@960 -- # wait 75752 00:10:41.275 05:51:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:41.275 05:51:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:41.275 05:51:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:41.275 05:51:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.275 05:51:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:41.275 05:51:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.275 05:51:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.275 05:51:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.275 05:51:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:41.275 ************************************ 00:10:41.275 END TEST nvmf_bdevio 00:10:41.275 ************************************ 00:10:41.275 00:10:41.275 real 0m1.882s 00:10:41.275 user 0m5.276s 00:10:41.275 sys 0m0.662s 00:10:41.275 05:51:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:41.275 05:51:02 -- common/autotest_common.sh@10 -- # set +x 00:10:41.275 05:51:02 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:10:41.275 05:51:02 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:41.275 05:51:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:41.275 05:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.275 05:51:02 -- common/autotest_common.sh@10 -- # set +x 00:10:41.275 ************************************ 00:10:41.275 START TEST nvmf_bdevio_no_huge 00:10:41.275 ************************************ 00:10:41.275 05:51:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:41.535 * Looking for test storage... 
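For reference, the target bring-up that the nvmf_bdevio suite traced in the run above reduces to the following sketch. It is not part of the captured output: rpc_cmd in the harness is assumed here to behave like invoking scripts/rpc.py against the running nvmf_tgt, and every argument is copied from the xtrace lines.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks (131072 blocks, as reported by bdevio)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches over TCP using the JSON emitted by gen_nvmf_target_json on /dev/fd/62:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62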
00:10:41.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.535 05:51:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:41.535 05:51:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:41.535 05:51:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:41.535 05:51:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:41.535 05:51:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:41.535 05:51:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:41.535 05:51:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:41.535 05:51:03 -- scripts/common.sh@335 -- # IFS=.-: 00:10:41.535 05:51:03 -- scripts/common.sh@335 -- # read -ra ver1 00:10:41.535 05:51:03 -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.535 05:51:03 -- scripts/common.sh@336 -- # read -ra ver2 00:10:41.535 05:51:03 -- scripts/common.sh@337 -- # local 'op=<' 00:10:41.535 05:51:03 -- scripts/common.sh@339 -- # ver1_l=2 00:10:41.535 05:51:03 -- scripts/common.sh@340 -- # ver2_l=1 00:10:41.535 05:51:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:41.535 05:51:03 -- scripts/common.sh@343 -- # case "$op" in 00:10:41.535 05:51:03 -- scripts/common.sh@344 -- # : 1 00:10:41.535 05:51:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:41.535 05:51:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.535 05:51:03 -- scripts/common.sh@364 -- # decimal 1 00:10:41.535 05:51:03 -- scripts/common.sh@352 -- # local d=1 00:10:41.535 05:51:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.535 05:51:03 -- scripts/common.sh@354 -- # echo 1 00:10:41.535 05:51:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:41.535 05:51:03 -- scripts/common.sh@365 -- # decimal 2 00:10:41.535 05:51:03 -- scripts/common.sh@352 -- # local d=2 00:10:41.535 05:51:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.535 05:51:03 -- scripts/common.sh@354 -- # echo 2 00:10:41.535 05:51:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:41.535 05:51:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:41.535 05:51:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:41.535 05:51:03 -- scripts/common.sh@367 -- # return 0 00:10:41.535 05:51:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.535 05:51:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:41.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.535 --rc genhtml_branch_coverage=1 00:10:41.535 --rc genhtml_function_coverage=1 00:10:41.535 --rc genhtml_legend=1 00:10:41.535 --rc geninfo_all_blocks=1 00:10:41.535 --rc geninfo_unexecuted_blocks=1 00:10:41.535 00:10:41.535 ' 00:10:41.535 05:51:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:41.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.535 --rc genhtml_branch_coverage=1 00:10:41.535 --rc genhtml_function_coverage=1 00:10:41.535 --rc genhtml_legend=1 00:10:41.535 --rc geninfo_all_blocks=1 00:10:41.535 --rc geninfo_unexecuted_blocks=1 00:10:41.535 00:10:41.535 ' 00:10:41.535 05:51:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:41.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.535 --rc genhtml_branch_coverage=1 00:10:41.535 --rc genhtml_function_coverage=1 00:10:41.535 --rc genhtml_legend=1 00:10:41.535 --rc geninfo_all_blocks=1 00:10:41.535 --rc geninfo_unexecuted_blocks=1 00:10:41.535 00:10:41.535 ' 00:10:41.535 
05:51:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:41.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.535 --rc genhtml_branch_coverage=1 00:10:41.535 --rc genhtml_function_coverage=1 00:10:41.535 --rc genhtml_legend=1 00:10:41.535 --rc geninfo_all_blocks=1 00:10:41.535 --rc geninfo_unexecuted_blocks=1 00:10:41.535 00:10:41.535 ' 00:10:41.535 05:51:03 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.535 05:51:03 -- nvmf/common.sh@7 -- # uname -s 00:10:41.535 05:51:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.535 05:51:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.535 05:51:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.535 05:51:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.535 05:51:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.535 05:51:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.535 05:51:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.535 05:51:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.535 05:51:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.535 05:51:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.535 05:51:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:41.535 05:51:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:41.535 05:51:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.535 05:51:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.535 05:51:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.535 05:51:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.535 05:51:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.535 05:51:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.535 05:51:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.535 05:51:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.535 05:51:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.535 05:51:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.535 05:51:03 -- paths/export.sh@5 -- # export PATH 00:10:41.535 05:51:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.535 05:51:03 -- nvmf/common.sh@46 -- # : 0 00:10:41.535 05:51:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:41.535 05:51:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:41.535 05:51:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:41.535 05:51:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.535 05:51:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.535 05:51:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:41.535 05:51:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:41.535 05:51:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:41.535 05:51:03 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.535 05:51:03 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.535 05:51:03 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:41.535 05:51:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:41.535 05:51:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.535 05:51:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:41.535 05:51:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:41.535 05:51:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:41.535 05:51:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.535 05:51:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.536 05:51:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.536 05:51:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:41.536 05:51:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:41.536 05:51:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:41.536 05:51:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:41.536 05:51:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:41.536 05:51:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:41.536 05:51:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.536 05:51:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.536 05:51:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:41.536 05:51:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:41.536 05:51:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.536 05:51:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.536 05:51:03 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.536 05:51:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.536 05:51:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.536 05:51:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.536 05:51:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.536 05:51:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.536 05:51:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:41.536 05:51:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:41.536 Cannot find device "nvmf_tgt_br" 00:10:41.536 05:51:03 -- nvmf/common.sh@154 -- # true 00:10:41.536 05:51:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.536 Cannot find device "nvmf_tgt_br2" 00:10:41.536 05:51:03 -- nvmf/common.sh@155 -- # true 00:10:41.536 05:51:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:41.536 05:51:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:41.536 Cannot find device "nvmf_tgt_br" 00:10:41.536 05:51:03 -- nvmf/common.sh@157 -- # true 00:10:41.536 05:51:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:41.536 Cannot find device "nvmf_tgt_br2" 00:10:41.536 05:51:03 -- nvmf/common.sh@158 -- # true 00:10:41.536 05:51:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:41.795 05:51:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:41.795 05:51:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.795 05:51:03 -- nvmf/common.sh@161 -- # true 00:10:41.795 05:51:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.795 05:51:03 -- nvmf/common.sh@162 -- # true 00:10:41.795 05:51:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.795 05:51:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.795 05:51:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.795 05:51:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.795 05:51:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.795 05:51:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.795 05:51:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.795 05:51:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:41.795 05:51:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:41.795 05:51:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:41.795 05:51:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:41.795 05:51:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:41.795 05:51:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:41.795 05:51:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.795 05:51:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.795 05:51:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:41.795 05:51:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:41.795 05:51:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:41.795 05:51:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.795 05:51:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.795 05:51:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.795 05:51:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.795 05:51:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.795 05:51:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:41.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:41.795 00:10:41.795 --- 10.0.0.2 ping statistics --- 00:10:41.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.796 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:41.796 05:51:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:41.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:10:41.796 00:10:41.796 --- 10.0.0.3 ping statistics --- 00:10:41.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.796 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:41.796 05:51:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:41.796 00:10:41.796 --- 10.0.0.1 ping statistics --- 00:10:41.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.796 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:41.796 05:51:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.796 05:51:03 -- nvmf/common.sh@421 -- # return 0 00:10:41.796 05:51:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:41.796 05:51:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.796 05:51:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:41.796 05:51:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:41.796 05:51:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.796 05:51:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:41.796 05:51:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:42.054 05:51:03 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:42.054 05:51:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:42.054 05:51:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:42.054 05:51:03 -- common/autotest_common.sh@10 -- # set +x 00:10:42.054 05:51:03 -- nvmf/common.sh@469 -- # nvmfpid=75961 00:10:42.054 05:51:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:42.055 05:51:03 -- nvmf/common.sh@470 -- # waitforlisten 75961 00:10:42.055 05:51:03 -- common/autotest_common.sh@829 -- # '[' -z 75961 ']' 00:10:42.055 05:51:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.055 05:51:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
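Collected in order, the nvmf_veth_init steps traced above build the test network; the commands below are copied from the trace and grouped only for readability (a sketch, not additional captured output).

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the host namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> host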
00:10:42.055 05:51:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.055 05:51:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.055 05:51:03 -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 [2024-12-15 05:51:03.484765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:42.055 [2024-12-15 05:51:03.484920] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:42.055 [2024-12-15 05:51:03.619400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.055 [2024-12-15 05:51:03.691813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:42.055 [2024-12-15 05:51:03.692004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.055 [2024-12-15 05:51:03.692018] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.055 [2024-12-15 05:51:03.692027] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.055 [2024-12-15 05:51:03.692180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:42.055 [2024-12-15 05:51:03.692420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:42.055 [2024-12-15 05:51:03.692475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:42.055 [2024-12-15 05:51:03.692481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.991 05:51:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.991 05:51:04 -- common/autotest_common.sh@862 -- # return 0 00:10:42.991 05:51:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:42.991 05:51:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.991 05:51:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.991 05:51:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.991 05:51:04 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.991 05:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.991 05:51:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.991 [2024-12-15 05:51:04.563710] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.991 05:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.991 05:51:04 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.991 05:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.991 05:51:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.991 Malloc0 00:10:42.991 05:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.991 05:51:04 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:42.991 05:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.991 05:51:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.991 05:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.991 05:51:04 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.991 05:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.991 
05:51:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.991 05:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.991 05:51:04 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.991 05:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.991 05:51:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.991 [2024-12-15 05:51:04.602288] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.991 05:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.991 05:51:04 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:42.991 05:51:04 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:42.991 05:51:04 -- nvmf/common.sh@520 -- # config=() 00:10:42.991 05:51:04 -- nvmf/common.sh@520 -- # local subsystem config 00:10:42.991 05:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:42.991 05:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:42.991 { 00:10:42.991 "params": { 00:10:42.991 "name": "Nvme$subsystem", 00:10:42.991 "trtype": "$TEST_TRANSPORT", 00:10:42.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.991 "adrfam": "ipv4", 00:10:42.991 "trsvcid": "$NVMF_PORT", 00:10:42.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.991 "hdgst": ${hdgst:-false}, 00:10:42.991 "ddgst": ${ddgst:-false} 00:10:42.991 }, 00:10:42.991 "method": "bdev_nvme_attach_controller" 00:10:42.991 } 00:10:42.991 EOF 00:10:42.991 )") 00:10:42.991 05:51:04 -- nvmf/common.sh@542 -- # cat 00:10:42.991 05:51:04 -- nvmf/common.sh@544 -- # jq . 00:10:42.991 05:51:04 -- nvmf/common.sh@545 -- # IFS=, 00:10:42.991 05:51:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:42.991 "params": { 00:10:42.991 "name": "Nvme1", 00:10:42.991 "trtype": "tcp", 00:10:42.991 "traddr": "10.0.0.2", 00:10:42.991 "adrfam": "ipv4", 00:10:42.991 "trsvcid": "4420", 00:10:42.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.991 "hdgst": false, 00:10:42.991 "ddgst": false 00:10:42.991 }, 00:10:42.991 "method": "bdev_nvme_attach_controller" 00:10:42.991 }' 00:10:43.250 [2024-12-15 05:51:04.657519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:43.250 [2024-12-15 05:51:04.657611] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75998 ] 00:10:43.251 [2024-12-15 05:51:04.796224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:43.511 [2024-12-15 05:51:04.903381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.511 [2024-12-15 05:51:04.903468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.511 [2024-12-15 05:51:04.903474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.511 [2024-12-15 05:51:05.065781] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
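Two details distinguish this run from the earlier hugepage-backed one, both visible in the command lines traced above: the target and bdevio are started with --no-huge and a 1024 MB memory cap, and the target runs inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of the two invocations, copied from the trace:

    # target side, started by nvmfappstart (pid 75961 in this run)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    # initiator side, fed the bdev_nvme_attach_controller JSON printed above on /dev/fd/62
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024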
00:10:43.511 [2024-12-15 05:51:05.066300] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:43.511 I/O targets: 00:10:43.511 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:43.511 00:10:43.511 00:10:43.511 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.511 http://cunit.sourceforge.net/ 00:10:43.511 00:10:43.511 00:10:43.511 Suite: bdevio tests on: Nvme1n1 00:10:43.511 Test: blockdev write read block ...passed 00:10:43.511 Test: blockdev write zeroes read block ...passed 00:10:43.511 Test: blockdev write zeroes read no split ...passed 00:10:43.511 Test: blockdev write zeroes read split ...passed 00:10:43.511 Test: blockdev write zeroes read split partial ...passed 00:10:43.511 Test: blockdev reset ...[2024-12-15 05:51:05.103947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:43.511 [2024-12-15 05:51:05.104198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1090760 (9): Bad file descriptor 00:10:43.511 passed 00:10:43.511 Test: blockdev write read 8 blocks ...[2024-12-15 05:51:05.124509] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:43.511 passed 00:10:43.511 Test: blockdev write read size > 128k ...passed 00:10:43.511 Test: blockdev write read invalid size ...passed 00:10:43.511 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:43.511 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:43.511 Test: blockdev write read max offset ...passed 00:10:43.511 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:43.511 Test: blockdev writev readv 8 blocks ...passed 00:10:43.511 Test: blockdev writev readv 30 x 1block ...passed 00:10:43.511 Test: blockdev writev readv block ...passed 00:10:43.511 Test: blockdev writev readv size > 128k ...passed 00:10:43.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:43.511 Test: blockdev comparev and writev ...[2024-12-15 05:51:05.134268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.511 [2024-12-15 05:51:05.134479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.134524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.511 [2024-12-15 05:51:05.134535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.134868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.511 [2024-12-15 05:51:05.134901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.134932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.511 [2024-12-15 05:51:05.134955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.135212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.511 [2024-12-15 05:51:05.135238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.135272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.511 [2024-12-15 05:51:05.135282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.135589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.511 [2024-12-15 05:51:05.135619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.135634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.511 [2024-12-15 05:51:05.135643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:43.511 passed 00:10:43.511 Test: blockdev nvme passthru rw ...passed 00:10:43.511 Test: blockdev nvme passthru vendor specific ...[2024-12-15 05:51:05.136695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.511 [2024-12-15 05:51:05.136738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:43.511 passed 00:10:43.511 Test: blockdev nvme admin passthru ...[2024-12-15 05:51:05.137051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.511 [2024-12-15 05:51:05.137075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.137196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.511 [2024-12-15 05:51:05.137212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:43.511 [2024-12-15 05:51:05.137363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.511 [2024-12-15 05:51:05.137378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:43.774 passed 00:10:43.774 Test: blockdev copy ...passed 00:10:43.774 00:10:43.774 Run Summary: Type Total Ran Passed Failed Inactive 00:10:43.774 suites 1 1 n/a 0 0 00:10:43.774 tests 23 23 23 0 0 00:10:43.774 asserts 152 152 152 0 n/a 00:10:43.774 00:10:43.774 Elapsed time = 0.178 seconds 00:10:44.032 05:51:05 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.032 05:51:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.032 05:51:05 -- common/autotest_common.sh@10 -- # set +x 00:10:44.032 05:51:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.032 05:51:05 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:44.032 05:51:05 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:44.032 05:51:05 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:44.032 05:51:05 -- nvmf/common.sh@116 -- # sync 00:10:44.032 05:51:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:44.032 05:51:05 -- nvmf/common.sh@119 -- # set +e 00:10:44.032 05:51:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:44.032 05:51:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:44.032 rmmod nvme_tcp 00:10:44.032 rmmod nvme_fabrics 00:10:44.032 rmmod nvme_keyring 00:10:44.032 05:51:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:44.032 05:51:05 -- nvmf/common.sh@123 -- # set -e 00:10:44.032 05:51:05 -- nvmf/common.sh@124 -- # return 0 00:10:44.032 05:51:05 -- nvmf/common.sh@477 -- # '[' -n 75961 ']' 00:10:44.032 05:51:05 -- nvmf/common.sh@478 -- # killprocess 75961 00:10:44.032 05:51:05 -- common/autotest_common.sh@936 -- # '[' -z 75961 ']' 00:10:44.032 05:51:05 -- common/autotest_common.sh@940 -- # kill -0 75961 00:10:44.032 05:51:05 -- common/autotest_common.sh@941 -- # uname 00:10:44.032 05:51:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:44.032 05:51:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75961 00:10:44.032 killing process with pid 75961 00:10:44.032 05:51:05 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:44.032 05:51:05 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:44.032 05:51:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75961' 00:10:44.032 05:51:05 -- common/autotest_common.sh@955 -- # kill 75961 00:10:44.032 05:51:05 -- common/autotest_common.sh@960 -- # wait 75961 00:10:44.291 05:51:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:44.291 05:51:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:44.291 05:51:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:44.291 05:51:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.291 05:51:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:44.291 05:51:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.291 05:51:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.291 05:51:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.291 05:51:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:44.550 00:10:44.550 real 0m3.033s 00:10:44.550 user 0m10.014s 00:10:44.550 sys 0m1.114s 00:10:44.550 ************************************ 00:10:44.551 END TEST nvmf_bdevio_no_huge 00:10:44.551 ************************************ 00:10:44.551 05:51:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.551 05:51:05 -- common/autotest_common.sh@10 -- # set +x 00:10:44.551 05:51:05 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:44.551 05:51:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:44.551 05:51:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.551 05:51:05 -- common/autotest_common.sh@10 -- # set +x 00:10:44.551 ************************************ 00:10:44.551 START TEST nvmf_tls 00:10:44.551 ************************************ 00:10:44.551 05:51:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:44.551 * Looking for test storage... 
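Between suites the harness tears the stack back down; a sketch of the steps whose output appears above (the _remove_spdk_ns helper runs with its output redirected, so its exact commands are not visible in this trace and are not reproduced here).

    modprobe -v -r nvme-tcp        # triggers the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 75961                     # nvmf_tgt pid from this run
    ip -4 addr flush nvmf_init_if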
00:10:44.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.551 05:51:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:44.551 05:51:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:44.551 05:51:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:44.551 05:51:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:44.551 05:51:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:44.551 05:51:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:44.551 05:51:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:44.551 05:51:06 -- scripts/common.sh@335 -- # IFS=.-: 00:10:44.551 05:51:06 -- scripts/common.sh@335 -- # read -ra ver1 00:10:44.551 05:51:06 -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.551 05:51:06 -- scripts/common.sh@336 -- # read -ra ver2 00:10:44.551 05:51:06 -- scripts/common.sh@337 -- # local 'op=<' 00:10:44.551 05:51:06 -- scripts/common.sh@339 -- # ver1_l=2 00:10:44.551 05:51:06 -- scripts/common.sh@340 -- # ver2_l=1 00:10:44.551 05:51:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:44.551 05:51:06 -- scripts/common.sh@343 -- # case "$op" in 00:10:44.551 05:51:06 -- scripts/common.sh@344 -- # : 1 00:10:44.551 05:51:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:44.551 05:51:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.551 05:51:06 -- scripts/common.sh@364 -- # decimal 1 00:10:44.551 05:51:06 -- scripts/common.sh@352 -- # local d=1 00:10:44.551 05:51:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.551 05:51:06 -- scripts/common.sh@354 -- # echo 1 00:10:44.551 05:51:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:44.551 05:51:06 -- scripts/common.sh@365 -- # decimal 2 00:10:44.551 05:51:06 -- scripts/common.sh@352 -- # local d=2 00:10:44.551 05:51:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.551 05:51:06 -- scripts/common.sh@354 -- # echo 2 00:10:44.551 05:51:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:44.551 05:51:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:44.551 05:51:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:44.551 05:51:06 -- scripts/common.sh@367 -- # return 0 00:10:44.551 05:51:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.551 05:51:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:44.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.551 --rc genhtml_branch_coverage=1 00:10:44.551 --rc genhtml_function_coverage=1 00:10:44.551 --rc genhtml_legend=1 00:10:44.551 --rc geninfo_all_blocks=1 00:10:44.551 --rc geninfo_unexecuted_blocks=1 00:10:44.551 00:10:44.551 ' 00:10:44.551 05:51:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:44.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.551 --rc genhtml_branch_coverage=1 00:10:44.551 --rc genhtml_function_coverage=1 00:10:44.551 --rc genhtml_legend=1 00:10:44.551 --rc geninfo_all_blocks=1 00:10:44.551 --rc geninfo_unexecuted_blocks=1 00:10:44.551 00:10:44.551 ' 00:10:44.551 05:51:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:44.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.551 --rc genhtml_branch_coverage=1 00:10:44.551 --rc genhtml_function_coverage=1 00:10:44.551 --rc genhtml_legend=1 00:10:44.551 --rc geninfo_all_blocks=1 00:10:44.551 --rc geninfo_unexecuted_blocks=1 00:10:44.551 00:10:44.551 ' 00:10:44.551 
05:51:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:44.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.551 --rc genhtml_branch_coverage=1 00:10:44.551 --rc genhtml_function_coverage=1 00:10:44.551 --rc genhtml_legend=1 00:10:44.551 --rc geninfo_all_blocks=1 00:10:44.551 --rc geninfo_unexecuted_blocks=1 00:10:44.551 00:10:44.551 ' 00:10:44.551 05:51:06 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.551 05:51:06 -- nvmf/common.sh@7 -- # uname -s 00:10:44.551 05:51:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.551 05:51:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.551 05:51:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.551 05:51:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.551 05:51:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.551 05:51:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.551 05:51:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.551 05:51:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.551 05:51:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.551 05:51:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.551 05:51:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:44.551 05:51:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:10:44.551 05:51:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.551 05:51:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.551 05:51:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.551 05:51:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.551 05:51:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.551 05:51:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.551 05:51:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.551 05:51:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.551 05:51:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.551 05:51:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.551 05:51:06 -- paths/export.sh@5 -- # export PATH 00:10:44.551 05:51:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.551 05:51:06 -- nvmf/common.sh@46 -- # : 0 00:10:44.551 05:51:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:44.551 05:51:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:44.551 05:51:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:44.551 05:51:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.551 05:51:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.551 05:51:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:44.551 05:51:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:44.551 05:51:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:44.551 05:51:06 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.551 05:51:06 -- target/tls.sh@71 -- # nvmftestinit 00:10:44.551 05:51:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:44.551 05:51:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.551 05:51:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:44.551 05:51:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:44.551 05:51:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:44.551 05:51:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.551 05:51:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.551 05:51:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.551 05:51:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:44.551 05:51:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:44.551 05:51:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:44.551 05:51:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:44.551 05:51:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:44.551 05:51:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:44.551 05:51:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.551 05:51:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.551 05:51:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:44.551 05:51:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:44.551 05:51:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.551 05:51:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.551 05:51:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.551 
05:51:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.551 05:51:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.552 05:51:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.552 05:51:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.552 05:51:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.552 05:51:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:44.811 05:51:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:44.811 Cannot find device "nvmf_tgt_br" 00:10:44.811 05:51:06 -- nvmf/common.sh@154 -- # true 00:10:44.811 05:51:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.811 Cannot find device "nvmf_tgt_br2" 00:10:44.811 05:51:06 -- nvmf/common.sh@155 -- # true 00:10:44.811 05:51:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:44.811 05:51:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:44.811 Cannot find device "nvmf_tgt_br" 00:10:44.811 05:51:06 -- nvmf/common.sh@157 -- # true 00:10:44.811 05:51:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:44.811 Cannot find device "nvmf_tgt_br2" 00:10:44.811 05:51:06 -- nvmf/common.sh@158 -- # true 00:10:44.811 05:51:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:44.811 05:51:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:44.811 05:51:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.811 05:51:06 -- nvmf/common.sh@161 -- # true 00:10:44.811 05:51:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.811 05:51:06 -- nvmf/common.sh@162 -- # true 00:10:44.811 05:51:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:44.811 05:51:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:44.811 05:51:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:44.811 05:51:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:44.811 05:51:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:44.811 05:51:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.811 05:51:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.811 05:51:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:44.811 05:51:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:44.811 05:51:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:44.811 05:51:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:44.811 05:51:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:44.811 05:51:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:44.811 05:51:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.811 05:51:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.811 05:51:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.811 05:51:06 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:44.811 05:51:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:44.811 05:51:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.811 05:51:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:45.070 05:51:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:45.070 05:51:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:45.070 05:51:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:45.070 05:51:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:45.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:45.070 00:10:45.070 --- 10.0.0.2 ping statistics --- 00:10:45.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.070 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:45.070 05:51:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:45.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:45.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:45.070 00:10:45.070 --- 10.0.0.3 ping statistics --- 00:10:45.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.070 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:45.070 05:51:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:45.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:45.070 00:10:45.070 --- 10.0.0.1 ping statistics --- 00:10:45.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.070 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:45.070 05:51:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.070 05:51:06 -- nvmf/common.sh@421 -- # return 0 00:10:45.070 05:51:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:45.070 05:51:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.070 05:51:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:45.070 05:51:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:45.070 05:51:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.070 05:51:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:45.070 05:51:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:45.070 05:51:06 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:45.070 05:51:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:45.070 05:51:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:45.070 05:51:06 -- common/autotest_common.sh@10 -- # set +x 00:10:45.070 05:51:06 -- nvmf/common.sh@469 -- # nvmfpid=76180 00:10:45.070 05:51:06 -- nvmf/common.sh@470 -- # waitforlisten 76180 00:10:45.070 05:51:06 -- common/autotest_common.sh@829 -- # '[' -z 76180 ']' 00:10:45.070 05:51:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:45.070 05:51:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
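The nvmf_veth_init steps above assemble a self-contained test network: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, and an iptables rule admitting NVMe/TCP traffic on port 4420. A minimal stand-alone sketch of the same setup, using the interface names, addresses and namespace name from the log (the second target interface, nvmf_tgt_if2/10.0.0.3, is built the same way and omitted here; everything else is assumed):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator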
00:10:45.070 05:51:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.070 05:51:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.070 05:51:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.070 05:51:06 -- common/autotest_common.sh@10 -- # set +x 00:10:45.070 [2024-12-15 05:51:06.565820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:45.070 [2024-12-15 05:51:06.565929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.071 [2024-12-15 05:51:06.705570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.330 [2024-12-15 05:51:06.745552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:45.330 [2024-12-15 05:51:06.745716] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.330 [2024-12-15 05:51:06.745733] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.330 [2024-12-15 05:51:06.745743] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.330 [2024-12-15 05:51:06.745772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.330 05:51:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.330 05:51:06 -- common/autotest_common.sh@862 -- # return 0 00:10:45.330 05:51:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:45.330 05:51:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:45.330 05:51:06 -- common/autotest_common.sh@10 -- # set +x 00:10:45.330 05:51:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.330 05:51:06 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:10:45.330 05:51:06 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:45.589 true 00:10:45.589 05:51:07 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:45.589 05:51:07 -- target/tls.sh@82 -- # jq -r .tls_version 00:10:45.848 05:51:07 -- target/tls.sh@82 -- # version=0 00:10:45.848 05:51:07 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:10:45.848 05:51:07 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:46.106 05:51:07 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:46.107 05:51:07 -- target/tls.sh@90 -- # jq -r .tls_version 00:10:46.366 05:51:07 -- target/tls.sh@90 -- # version=13 00:10:46.366 05:51:07 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:10:46.366 05:51:07 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:46.625 05:51:08 -- target/tls.sh@98 -- # jq -r .tls_version 00:10:46.625 05:51:08 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:46.884 05:51:08 -- target/tls.sh@98 -- # version=7 00:10:46.884 05:51:08 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:10:46.884 05:51:08 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:46.884 05:51:08 
-- target/tls.sh@105 -- # jq -r .enable_ktls 00:10:47.142 05:51:08 -- target/tls.sh@105 -- # ktls=false 00:10:47.142 05:51:08 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:10:47.143 05:51:08 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:47.401 05:51:08 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:47.401 05:51:08 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:10:47.660 05:51:09 -- target/tls.sh@113 -- # ktls=true 00:10:47.660 05:51:09 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:10:47.660 05:51:09 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:47.918 05:51:09 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:10:47.918 05:51:09 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:48.177 05:51:09 -- target/tls.sh@121 -- # ktls=false 00:10:48.177 05:51:09 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:10:48.177 05:51:09 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:10:48.177 05:51:09 -- target/tls.sh@49 -- # local key hash crc 00:10:48.177 05:51:09 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:10:48.177 05:51:09 -- target/tls.sh@51 -- # hash=01 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # gzip -1 -c 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # tail -c8 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # head -c 4 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # crc='p$H�' 00:10:48.177 05:51:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:48.177 05:51:09 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:10:48.177 05:51:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:48.177 05:51:09 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:48.177 05:51:09 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:10:48.177 05:51:09 -- target/tls.sh@49 -- # local key hash crc 00:10:48.177 05:51:09 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:10:48.177 05:51:09 -- target/tls.sh@51 -- # hash=01 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # gzip -1 -c 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # tail -c8 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # head -c 4 00:10:48.177 05:51:09 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:10:48.177 05:51:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:48.177 05:51:09 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:10:48.177 05:51:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:48.177 05:51:09 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:48.177 05:51:09 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:48.177 05:51:09 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:48.177 05:51:09 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:48.177 05:51:09 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:48.177 05:51:09 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:48.177 05:51:09 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:48.177 05:51:09 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:48.436 05:51:10 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:49.004 05:51:10 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:49.004 05:51:10 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:49.004 05:51:10 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:49.004 [2024-12-15 05:51:10.635253] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.263 05:51:10 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:49.263 05:51:10 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:49.522 [2024-12-15 05:51:11.111424] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:49.522 [2024-12-15 05:51:11.111646] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.522 05:51:11 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:49.780 malloc0 00:10:49.780 05:51:11 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:50.039 05:51:11 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:50.297 05:51:11 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:02.503 Initializing NVMe Controllers 00:11:02.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:02.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:02.503 Initialization complete. Launching workers. 
00:11:02.503 ======================================================== 00:11:02.503 Latency(us) 00:11:02.503 Device Information : IOPS MiB/s Average min max 00:11:02.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10639.08 41.56 6016.88 1418.96 10640.28 00:11:02.503 ======================================================== 00:11:02.503 Total : 10639.08 41.56 6016.88 1418.96 10640.28 00:11:02.503 00:11:02.503 05:51:22 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:02.503 05:51:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:02.503 05:51:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:02.503 05:51:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:02.503 05:51:22 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:02.503 05:51:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:02.503 05:51:22 -- target/tls.sh@28 -- # bdevperf_pid=76418 00:11:02.503 05:51:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:02.503 05:51:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:02.503 05:51:22 -- target/tls.sh@31 -- # waitforlisten 76418 /var/tmp/bdevperf.sock 00:11:02.503 05:51:22 -- common/autotest_common.sh@829 -- # '[' -z 76418 ']' 00:11:02.503 05:51:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:02.503 05:51:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.503 05:51:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:02.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:02.503 05:51:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.503 05:51:22 -- common/autotest_common.sh@10 -- # set +x 00:11:02.503 [2024-12-15 05:51:22.068730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:02.503 [2024-12-15 05:51:22.069026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76418 ] 00:11:02.503 [2024-12-15 05:51:22.205056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.503 [2024-12-15 05:51:22.242365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.503 05:51:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.503 05:51:22 -- common/autotest_common.sh@862 -- # return 0 00:11:02.503 05:51:22 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:02.503 [2024-12-15 05:51:22.602144] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:02.503 TLSTESTn1 00:11:02.503 05:51:22 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:02.503 Running I/O for 10 seconds... 
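Behind the run above, the TLS happy path comes down to a short RPC sequence: configure the ssl socket implementation, bring up a TLS-enabled listener with a registered host PSK, then attach a TLS initiator through bdevperf's RPC socket and run the verify workload. A condensed recap using the log's addresses, NQNs and key file (rpc.py/bdevperf.py paths and the key path are abbreviated; a sketch of what setup_nvmf_tgt and run_bdevperf drive, not the full helpers):

    # target side, inside the nvmf_tgt_ns_spdk namespace
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt

    # initiator side, against a bdevperf started with -z -r /var/tmp/bdevperf.sock
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key1.txt
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests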
00:11:12.483 00:11:12.483 Latency(us) 00:11:12.483 [2024-12-15T05:51:34.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.483 [2024-12-15T05:51:34.124Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:12.483 Verification LBA range: start 0x0 length 0x2000 00:11:12.483 TLSTESTn1 : 10.02 6110.26 23.87 0.00 0.00 20913.81 5153.51 28478.37 00:11:12.483 [2024-12-15T05:51:34.124Z] =================================================================================================================== 00:11:12.483 [2024-12-15T05:51:34.124Z] Total : 6110.26 23.87 0.00 0.00 20913.81 5153.51 28478.37 00:11:12.483 0 00:11:12.483 05:51:32 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:12.483 05:51:32 -- target/tls.sh@45 -- # killprocess 76418 00:11:12.483 05:51:32 -- common/autotest_common.sh@936 -- # '[' -z 76418 ']' 00:11:12.483 05:51:32 -- common/autotest_common.sh@940 -- # kill -0 76418 00:11:12.483 05:51:32 -- common/autotest_common.sh@941 -- # uname 00:11:12.483 05:51:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:12.483 05:51:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76418 00:11:12.483 killing process with pid 76418 00:11:12.483 Received shutdown signal, test time was about 10.000000 seconds 00:11:12.483 00:11:12.483 Latency(us) 00:11:12.483 [2024-12-15T05:51:34.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.483 [2024-12-15T05:51:34.124Z] =================================================================================================================== 00:11:12.483 [2024-12-15T05:51:34.124Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:12.483 05:51:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:12.483 05:51:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:12.483 05:51:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76418' 00:11:12.483 05:51:32 -- common/autotest_common.sh@955 -- # kill 76418 00:11:12.483 05:51:32 -- common/autotest_common.sh@960 -- # wait 76418 00:11:12.483 05:51:32 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:12.483 05:51:32 -- common/autotest_common.sh@650 -- # local es=0 00:11:12.483 05:51:32 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:12.483 05:51:32 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:12.483 05:51:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.483 05:51:32 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:12.483 05:51:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.483 05:51:32 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:12.483 05:51:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:12.483 05:51:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:12.483 05:51:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:12.483 05:51:32 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:11:12.483 05:51:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:12.483 
05:51:32 -- target/tls.sh@28 -- # bdevperf_pid=76544 00:11:12.483 05:51:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:12.483 05:51:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:12.483 05:51:32 -- target/tls.sh@31 -- # waitforlisten 76544 /var/tmp/bdevperf.sock 00:11:12.483 05:51:32 -- common/autotest_common.sh@829 -- # '[' -z 76544 ']' 00:11:12.483 05:51:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:12.483 05:51:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.483 05:51:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:12.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:12.483 05:51:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.483 05:51:32 -- common/autotest_common.sh@10 -- # set +x 00:11:12.483 [2024-12-15 05:51:33.042218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:12.483 [2024-12-15 05:51:33.042487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76544 ] 00:11:12.483 [2024-12-15 05:51:33.176578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.483 [2024-12-15 05:51:33.210673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.484 05:51:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.484 05:51:34 -- common/autotest_common.sh@862 -- # return 0 00:11:12.484 05:51:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:12.743 [2024-12-15 05:51:34.222011] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:12.743 [2024-12-15 05:51:34.227132] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:12.743 [2024-12-15 05:51:34.227764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a5b80 (107): Transport endpoint is not connected 00:11:12.743 [2024-12-15 05:51:34.228751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a5b80 (9): Bad file descriptor 00:11:12.743 [2024-12-15 05:51:34.229747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:12.743 [2024-12-15 05:51:34.229767] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:12.743 [2024-12-15 05:51:34.229795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:12.743 request: 00:11:12.743 { 00:11:12.743 "name": "TLSTEST", 00:11:12.743 "trtype": "tcp", 00:11:12.743 "traddr": "10.0.0.2", 00:11:12.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:12.743 "adrfam": "ipv4", 00:11:12.743 "trsvcid": "4420", 00:11:12.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.743 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:11:12.743 "method": "bdev_nvme_attach_controller", 00:11:12.743 "req_id": 1 00:11:12.743 } 00:11:12.743 Got JSON-RPC error response 00:11:12.743 response: 00:11:12.743 { 00:11:12.743 "code": -32602, 00:11:12.743 "message": "Invalid parameters" 00:11:12.743 } 00:11:12.743 05:51:34 -- target/tls.sh@36 -- # killprocess 76544 00:11:12.743 05:51:34 -- common/autotest_common.sh@936 -- # '[' -z 76544 ']' 00:11:12.743 05:51:34 -- common/autotest_common.sh@940 -- # kill -0 76544 00:11:12.743 05:51:34 -- common/autotest_common.sh@941 -- # uname 00:11:12.743 05:51:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:12.743 05:51:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76544 00:11:12.743 killing process with pid 76544 00:11:12.743 Received shutdown signal, test time was about 10.000000 seconds 00:11:12.743 00:11:12.743 Latency(us) 00:11:12.743 [2024-12-15T05:51:34.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.743 [2024-12-15T05:51:34.384Z] =================================================================================================================== 00:11:12.743 [2024-12-15T05:51:34.384Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:12.743 05:51:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:12.743 05:51:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:12.743 05:51:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76544' 00:11:12.743 05:51:34 -- common/autotest_common.sh@955 -- # kill 76544 00:11:12.743 05:51:34 -- common/autotest_common.sh@960 -- # wait 76544 00:11:13.002 05:51:34 -- target/tls.sh@37 -- # return 1 00:11:13.002 05:51:34 -- common/autotest_common.sh@653 -- # es=1 00:11:13.002 05:51:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:13.002 05:51:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:13.002 05:51:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:13.002 05:51:34 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:13.002 05:51:34 -- common/autotest_common.sh@650 -- # local es=0 00:11:13.002 05:51:34 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:13.002 05:51:34 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:13.002 05:51:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.002 05:51:34 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:13.002 05:51:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.002 05:51:34 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:13.002 05:51:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:13.002 05:51:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:13.002 05:51:34 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:11:13.002 05:51:34 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:13.002 05:51:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:13.002 05:51:34 -- target/tls.sh@28 -- # bdevperf_pid=76567 00:11:13.002 05:51:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:13.002 05:51:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:13.002 05:51:34 -- target/tls.sh@31 -- # waitforlisten 76567 /var/tmp/bdevperf.sock 00:11:13.002 05:51:34 -- common/autotest_common.sh@829 -- # '[' -z 76567 ']' 00:11:13.002 05:51:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.002 05:51:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.002 05:51:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.002 05:51:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.002 05:51:34 -- common/autotest_common.sh@10 -- # set +x 00:11:13.002 [2024-12-15 05:51:34.466373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:13.002 [2024-12-15 05:51:34.466734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76567 ] 00:11:13.002 [2024-12-15 05:51:34.603791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.002 [2024-12-15 05:51:34.634682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.938 05:51:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.938 05:51:35 -- common/autotest_common.sh@862 -- # return 0 00:11:13.938 05:51:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:14.197 [2024-12-15 05:51:35.677531] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:14.197 [2024-12-15 05:51:35.684638] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:14.197 [2024-12-15 05:51:35.684677] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:14.197 [2024-12-15 05:51:35.684743] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:14.197 [2024-12-15 05:51:35.685146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1b80 (107): Transport endpoint is not connected 00:11:14.197 [2024-12-15 05:51:35.686138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1b80 (9): Bad file descriptor 00:11:14.197 [2024-12-15 05:51:35.687135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:14.197 [2024-12-15 05:51:35.687157] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:14.197 [2024-12-15 05:51:35.687168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:11:14.197 request: 00:11:14.197 { 00:11:14.197 "name": "TLSTEST", 00:11:14.197 "trtype": "tcp", 00:11:14.198 "traddr": "10.0.0.2", 00:11:14.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:14.198 "adrfam": "ipv4", 00:11:14.198 "trsvcid": "4420", 00:11:14.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.198 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:14.198 "method": "bdev_nvme_attach_controller", 00:11:14.198 "req_id": 1 00:11:14.198 } 00:11:14.198 Got JSON-RPC error response 00:11:14.198 response: 00:11:14.198 { 00:11:14.198 "code": -32602, 00:11:14.198 "message": "Invalid parameters" 00:11:14.198 } 00:11:14.198 05:51:35 -- target/tls.sh@36 -- # killprocess 76567 00:11:14.198 05:51:35 -- common/autotest_common.sh@936 -- # '[' -z 76567 ']' 00:11:14.198 05:51:35 -- common/autotest_common.sh@940 -- # kill -0 76567 00:11:14.198 05:51:35 -- common/autotest_common.sh@941 -- # uname 00:11:14.198 05:51:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:14.198 05:51:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76567 00:11:14.198 killing process with pid 76567 00:11:14.198 Received shutdown signal, test time was about 10.000000 seconds 00:11:14.198 00:11:14.198 Latency(us) 00:11:14.198 [2024-12-15T05:51:35.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.198 [2024-12-15T05:51:35.839Z] =================================================================================================================== 00:11:14.198 [2024-12-15T05:51:35.839Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:14.198 05:51:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:14.198 05:51:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:14.198 05:51:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76567' 00:11:14.198 05:51:35 -- common/autotest_common.sh@955 -- # kill 76567 00:11:14.198 05:51:35 -- common/autotest_common.sh@960 -- # wait 76567 00:11:14.457 05:51:35 -- target/tls.sh@37 -- # return 1 00:11:14.457 05:51:35 -- common/autotest_common.sh@653 -- # es=1 00:11:14.457 05:51:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:14.457 05:51:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:14.457 05:51:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:14.457 05:51:35 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:14.457 05:51:35 -- common/autotest_common.sh@650 -- # local es=0 00:11:14.457 05:51:35 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:14.457 05:51:35 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:14.457 05:51:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:14.457 05:51:35 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:14.457 05:51:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:14.457 05:51:35 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:14.457 05:51:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:14.457 05:51:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:14.457 05:51:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:14.457 05:51:35 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:14.458 05:51:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:14.458 05:51:35 -- target/tls.sh@28 -- # bdevperf_pid=76595 00:11:14.458 05:51:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:14.458 05:51:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:14.458 05:51:35 -- target/tls.sh@31 -- # waitforlisten 76595 /var/tmp/bdevperf.sock 00:11:14.458 05:51:35 -- common/autotest_common.sh@829 -- # '[' -z 76595 ']' 00:11:14.458 05:51:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:14.458 05:51:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.458 05:51:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:14.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:14.458 05:51:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.458 05:51:35 -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 [2024-12-15 05:51:35.928683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:14.458 [2024-12-15 05:51:35.929027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76595 ] 00:11:14.458 [2024-12-15 05:51:36.068280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.717 [2024-12-15 05:51:36.102810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.285 05:51:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.285 05:51:36 -- common/autotest_common.sh@862 -- # return 0 00:11:15.285 05:51:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:15.544 [2024-12-15 05:51:37.134828] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:15.544 [2024-12-15 05:51:37.144709] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:15.544 [2024-12-15 05:51:37.144747] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:15.544 [2024-12-15 05:51:37.144809] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:15.544 [2024-12-15 05:51:37.145221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf50b80 
(107): Transport endpoint is not connected 00:11:15.544 [2024-12-15 05:51:37.146211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf50b80 (9): Bad file descriptor 00:11:15.544 [2024-12-15 05:51:37.147208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:15.544 [2024-12-15 05:51:37.147271] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:15.544 [2024-12-15 05:51:37.147282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:11:15.544 request: 00:11:15.544 { 00:11:15.544 "name": "TLSTEST", 00:11:15.544 "trtype": "tcp", 00:11:15.544 "traddr": "10.0.0.2", 00:11:15.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.544 "adrfam": "ipv4", 00:11:15.544 "trsvcid": "4420", 00:11:15.544 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:15.544 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:15.544 "method": "bdev_nvme_attach_controller", 00:11:15.544 "req_id": 1 00:11:15.544 } 00:11:15.544 Got JSON-RPC error response 00:11:15.544 response: 00:11:15.544 { 00:11:15.544 "code": -32602, 00:11:15.544 "message": "Invalid parameters" 00:11:15.544 } 00:11:15.544 05:51:37 -- target/tls.sh@36 -- # killprocess 76595 00:11:15.544 05:51:37 -- common/autotest_common.sh@936 -- # '[' -z 76595 ']' 00:11:15.544 05:51:37 -- common/autotest_common.sh@940 -- # kill -0 76595 00:11:15.544 05:51:37 -- common/autotest_common.sh@941 -- # uname 00:11:15.544 05:51:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.544 05:51:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76595 00:11:15.804 killing process with pid 76595 00:11:15.804 Received shutdown signal, test time was about 10.000000 seconds 00:11:15.804 00:11:15.804 Latency(us) 00:11:15.804 [2024-12-15T05:51:37.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.804 [2024-12-15T05:51:37.445Z] =================================================================================================================== 00:11:15.804 [2024-12-15T05:51:37.445Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:15.804 05:51:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:15.804 05:51:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:15.804 05:51:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76595' 00:11:15.804 05:51:37 -- common/autotest_common.sh@955 -- # kill 76595 00:11:15.804 05:51:37 -- common/autotest_common.sh@960 -- # wait 76595 00:11:15.804 05:51:37 -- target/tls.sh@37 -- # return 1 00:11:15.804 05:51:37 -- common/autotest_common.sh@653 -- # es=1 00:11:15.804 05:51:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:15.804 05:51:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:15.804 05:51:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:15.804 05:51:37 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:15.804 05:51:37 -- common/autotest_common.sh@650 -- # local es=0 00:11:15.804 05:51:37 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:15.804 05:51:37 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:15.804 05:51:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.804 05:51:37 -- common/autotest_common.sh@642 -- # type 
-t run_bdevperf 00:11:15.804 05:51:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.804 05:51:37 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:15.804 05:51:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:15.804 05:51:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:15.804 05:51:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:15.804 05:51:37 -- target/tls.sh@23 -- # psk= 00:11:15.804 05:51:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:15.804 05:51:37 -- target/tls.sh@28 -- # bdevperf_pid=76617 00:11:15.804 05:51:37 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:15.804 05:51:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:15.804 05:51:37 -- target/tls.sh@31 -- # waitforlisten 76617 /var/tmp/bdevperf.sock 00:11:15.804 05:51:37 -- common/autotest_common.sh@829 -- # '[' -z 76617 ']' 00:11:15.804 05:51:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:15.804 05:51:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.804 05:51:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:15.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:15.804 05:51:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.804 05:51:37 -- common/autotest_common.sh@10 -- # set +x 00:11:15.804 [2024-12-15 05:51:37.379793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:15.804 [2024-12-15 05:51:37.380094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76617 ] 00:11:16.063 [2024-12-15 05:51:37.515656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.063 [2024-12-15 05:51:37.550998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.012 05:51:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.012 05:51:38 -- common/autotest_common.sh@862 -- # return 0 00:11:17.012 05:51:38 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:17.012 [2024-12-15 05:51:38.606283] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:17.012 [2024-12-15 05:51:38.608238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2471450 (9): Bad file descriptor 00:11:17.012 [2024-12-15 05:51:38.609233] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:17.012 [2024-12-15 05:51:38.609253] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:17.012 [2024-12-15 05:51:38.609263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:17.012 request: 00:11:17.012 { 00:11:17.012 "name": "TLSTEST", 00:11:17.012 "trtype": "tcp", 00:11:17.012 "traddr": "10.0.0.2", 00:11:17.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.012 "adrfam": "ipv4", 00:11:17.012 "trsvcid": "4420", 00:11:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.012 "method": "bdev_nvme_attach_controller", 00:11:17.012 "req_id": 1 00:11:17.012 } 00:11:17.012 Got JSON-RPC error response 00:11:17.012 response: 00:11:17.012 { 00:11:17.012 "code": -32602, 00:11:17.012 "message": "Invalid parameters" 00:11:17.012 } 00:11:17.012 05:51:38 -- target/tls.sh@36 -- # killprocess 76617 00:11:17.012 05:51:38 -- common/autotest_common.sh@936 -- # '[' -z 76617 ']' 00:11:17.012 05:51:38 -- common/autotest_common.sh@940 -- # kill -0 76617 00:11:17.012 05:51:38 -- common/autotest_common.sh@941 -- # uname 00:11:17.012 05:51:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:17.012 05:51:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76617 00:11:17.315 killing process with pid 76617 00:11:17.315 Received shutdown signal, test time was about 10.000000 seconds 00:11:17.315 00:11:17.315 Latency(us) 00:11:17.315 [2024-12-15T05:51:38.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.315 [2024-12-15T05:51:38.956Z] =================================================================================================================== 00:11:17.315 [2024-12-15T05:51:38.956Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:17.315 05:51:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:17.315 05:51:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:17.315 05:51:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76617' 00:11:17.315 05:51:38 -- common/autotest_common.sh@955 -- # kill 76617 00:11:17.315 05:51:38 -- common/autotest_common.sh@960 -- # wait 76617 00:11:17.315 05:51:38 -- target/tls.sh@37 -- # return 1 00:11:17.315 05:51:38 -- common/autotest_common.sh@653 -- # es=1 00:11:17.315 05:51:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:17.315 05:51:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:17.315 05:51:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:17.315 05:51:38 -- target/tls.sh@167 -- # killprocess 76180 00:11:17.315 05:51:38 -- common/autotest_common.sh@936 -- # '[' -z 76180 ']' 00:11:17.315 05:51:38 -- common/autotest_common.sh@940 -- # kill -0 76180 00:11:17.315 05:51:38 -- common/autotest_common.sh@941 -- # uname 00:11:17.315 05:51:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:17.315 05:51:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76180 00:11:17.315 killing process with pid 76180 00:11:17.315 05:51:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:17.315 05:51:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:17.315 05:51:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76180' 00:11:17.315 05:51:38 -- common/autotest_common.sh@955 -- # kill 76180 00:11:17.315 05:51:38 -- common/autotest_common.sh@960 -- # wait 76180 00:11:17.582 05:51:38 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:11:17.582 05:51:38 -- target/tls.sh@49 -- # local key hash crc 00:11:17.582 05:51:38 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:17.582 05:51:38 -- target/tls.sh@51 -- # hash=02 
00:11:17.582 05:51:38 -- target/tls.sh@52 -- # gzip -1 -c 00:11:17.582 05:51:38 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:11:17.582 05:51:38 -- target/tls.sh@52 -- # tail -c8 00:11:17.582 05:51:38 -- target/tls.sh@52 -- # head -c 4 00:11:17.582 05:51:38 -- target/tls.sh@52 -- # crc='�e�'\''' 00:11:17.582 05:51:38 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:11:17.582 05:51:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:17.582 05:51:38 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:17.582 05:51:38 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:17.582 05:51:38 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:17.582 05:51:38 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:17.582 05:51:38 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:17.582 05:51:38 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:11:17.582 05:51:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:17.582 05:51:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:17.582 05:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:17.582 05:51:38 -- nvmf/common.sh@469 -- # nvmfpid=76665 00:11:17.582 05:51:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:17.582 05:51:38 -- nvmf/common.sh@470 -- # waitforlisten 76665 00:11:17.582 05:51:38 -- common/autotest_common.sh@829 -- # '[' -z 76665 ']' 00:11:17.582 05:51:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.582 05:51:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.582 05:51:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.582 05:51:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.582 05:51:38 -- common/autotest_common.sh@10 -- # set +x 00:11:17.582 [2024-12-15 05:51:39.034078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:17.582 [2024-12-15 05:51:39.034349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.582 [2024-12-15 05:51:39.165399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.582 [2024-12-15 05:51:39.196435] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:17.582 [2024-12-15 05:51:39.196584] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.582 [2024-12-15 05:51:39.196598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.582 [2024-12-15 05:51:39.196606] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
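format_interchange_psk, exercised here for the longer key_long value just as it was for the two shorter keys earlier, turns a hex key string into the configured NVMe TLS PSK form by appending a CRC32 and base64-encoding the result. A rough stand-alone equivalent of the commands the log shows, assuming GNU gzip and coreutils and ignoring the edge case of CRC bytes that are not shell-safe:

    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02                                                        # hash identifier recorded in the key prefix
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)       # CRC32 taken from the gzip trailer
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    # expected output, as in the log:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: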
00:11:17.582 [2024-12-15 05:51:39.196636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.518 05:51:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.518 05:51:39 -- common/autotest_common.sh@862 -- # return 0 00:11:18.518 05:51:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:18.518 05:51:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:18.518 05:51:39 -- common/autotest_common.sh@10 -- # set +x 00:11:18.518 05:51:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.518 05:51:40 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:18.518 05:51:40 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:18.518 05:51:40 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:18.776 [2024-12-15 05:51:40.268768] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.777 05:51:40 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:19.035 05:51:40 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:19.293 [2024-12-15 05:51:40.768978] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:19.293 [2024-12-15 05:51:40.769179] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.293 05:51:40 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:19.552 malloc0 00:11:19.552 05:51:41 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:19.810 05:51:41 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:20.069 05:51:41 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:20.069 05:51:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:20.069 05:51:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:20.069 05:51:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:20.069 05:51:41 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:20.069 05:51:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:20.069 05:51:41 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:20.069 05:51:41 -- target/tls.sh@28 -- # bdevperf_pid=76714 00:11:20.069 05:51:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:20.069 05:51:41 -- target/tls.sh@31 -- # waitforlisten 76714 /var/tmp/bdevperf.sock 00:11:20.069 05:51:41 -- common/autotest_common.sh@829 -- # '[' -z 76714 ']' 00:11:20.069 05:51:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:20.069 05:51:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.069 05:51:41 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:20.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:20.069 05:51:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.069 05:51:41 -- common/autotest_common.sh@10 -- # set +x 00:11:20.069 [2024-12-15 05:51:41.547262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:20.069 [2024-12-15 05:51:41.547673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76714 ] 00:11:20.069 [2024-12-15 05:51:41.683741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.327 [2024-12-15 05:51:41.723213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.894 05:51:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.894 05:51:42 -- common/autotest_common.sh@862 -- # return 0 00:11:20.894 05:51:42 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:21.152 [2024-12-15 05:51:42.653636] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:21.152 TLSTESTn1 00:11:21.152 05:51:42 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:21.411 Running I/O for 10 seconds... 00:11:31.387 00:11:31.387 Latency(us) 00:11:31.387 [2024-12-15T05:51:53.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.387 [2024-12-15T05:51:53.028Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:31.387 Verification LBA range: start 0x0 length 0x2000 00:11:31.387 TLSTESTn1 : 10.02 5898.59 23.04 0.00 0.00 21663.06 5749.29 21686.46 00:11:31.387 [2024-12-15T05:51:53.028Z] =================================================================================================================== 00:11:31.387 [2024-12-15T05:51:53.028Z] Total : 5898.59 23.04 0.00 0.00 21663.06 5749.29 21686.46 00:11:31.387 0 00:11:31.387 05:51:52 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:31.387 05:51:52 -- target/tls.sh@45 -- # killprocess 76714 00:11:31.387 05:51:52 -- common/autotest_common.sh@936 -- # '[' -z 76714 ']' 00:11:31.387 05:51:52 -- common/autotest_common.sh@940 -- # kill -0 76714 00:11:31.387 05:51:52 -- common/autotest_common.sh@941 -- # uname 00:11:31.387 05:51:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:31.387 05:51:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76714 00:11:31.387 killing process with pid 76714 00:11:31.387 Received shutdown signal, test time was about 10.000000 seconds 00:11:31.387 00:11:31.387 Latency(us) 00:11:31.387 [2024-12-15T05:51:53.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.387 [2024-12-15T05:51:53.028Z] =================================================================================================================== 00:11:31.387 [2024-12-15T05:51:53.028Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:31.387 05:51:52 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:31.387 05:51:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:31.387 05:51:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76714' 00:11:31.387 05:51:52 -- common/autotest_common.sh@955 -- # kill 76714 00:11:31.387 05:51:52 -- common/autotest_common.sh@960 -- # wait 76714 00:11:31.646 05:51:53 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:31.646 05:51:53 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:31.646 05:51:53 -- common/autotest_common.sh@650 -- # local es=0 00:11:31.646 05:51:53 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:31.646 05:51:53 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:31.646 05:51:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.646 05:51:53 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:31.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:31.646 05:51:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.646 05:51:53 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:31.646 05:51:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:31.646 05:51:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:31.646 05:51:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:31.646 05:51:53 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:31.646 05:51:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:31.646 05:51:53 -- target/tls.sh@28 -- # bdevperf_pid=76850 00:11:31.646 05:51:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:31.646 05:51:53 -- target/tls.sh@31 -- # waitforlisten 76850 /var/tmp/bdevperf.sock 00:11:31.646 05:51:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:31.646 05:51:53 -- common/autotest_common.sh@829 -- # '[' -z 76850 ']' 00:11:31.646 05:51:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:31.646 05:51:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.646 05:51:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:31.646 05:51:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.646 05:51:53 -- common/autotest_common.sh@10 -- # set +x 00:11:31.646 [2024-12-15 05:51:53.152328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
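The successful TLSTESTn1 run above (bdevperf pid 76714) corresponds, on the initiator side, to roughly the following manual sequence. This is a minimal sketch using the paths and NQNs printed in the trace; the SPDK and PSK shell variables are introduced here only for readability and are not part of the test scripts:

  SPDK=/home/vagrant/spdk_repo/spdk
  PSK=$SPDK/test/nvmf/target/key_long.txt
  # start bdevperf idle (-z), exposing its own RPC socket
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # (wait for /var/tmp/bdevperf.sock to appear before issuing RPCs)
  # attach an NVMe/TCP controller, supplying the TLS pre-shared key
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$PSK"
  # drive the verify workload against the attached bdev
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests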
00:11:31.646 [2024-12-15 05:51:53.152635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76850 ] 00:11:31.974 [2024-12-15 05:51:53.292691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.974 [2024-12-15 05:51:53.324562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.541 05:51:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.541 05:51:54 -- common/autotest_common.sh@862 -- # return 0 00:11:32.541 05:51:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:33.109 [2024-12-15 05:51:54.455054] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:33.109 [2024-12-15 05:51:54.455357] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:33.109 request: 00:11:33.109 { 00:11:33.109 "name": "TLSTEST", 00:11:33.109 "trtype": "tcp", 00:11:33.109 "traddr": "10.0.0.2", 00:11:33.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:33.109 "adrfam": "ipv4", 00:11:33.109 "trsvcid": "4420", 00:11:33.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:33.109 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:33.109 "method": "bdev_nvme_attach_controller", 00:11:33.109 "req_id": 1 00:11:33.109 } 00:11:33.109 Got JSON-RPC error response 00:11:33.109 response: 00:11:33.109 { 00:11:33.109 "code": -22, 00:11:33.109 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:33.109 } 00:11:33.109 05:51:54 -- target/tls.sh@36 -- # killprocess 76850 00:11:33.109 05:51:54 -- common/autotest_common.sh@936 -- # '[' -z 76850 ']' 00:11:33.109 05:51:54 -- common/autotest_common.sh@940 -- # kill -0 76850 00:11:33.109 05:51:54 -- common/autotest_common.sh@941 -- # uname 00:11:33.109 05:51:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:33.109 05:51:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76850 00:11:33.109 killing process with pid 76850 00:11:33.109 Received shutdown signal, test time was about 10.000000 seconds 00:11:33.109 00:11:33.109 Latency(us) 00:11:33.109 [2024-12-15T05:51:54.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.109 [2024-12-15T05:51:54.750Z] =================================================================================================================== 00:11:33.109 [2024-12-15T05:51:54.750Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:33.109 05:51:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:33.109 05:51:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:33.109 05:51:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76850' 00:11:33.109 05:51:54 -- common/autotest_common.sh@955 -- # kill 76850 00:11:33.109 05:51:54 -- common/autotest_common.sh@960 -- # wait 76850 00:11:33.109 05:51:54 -- target/tls.sh@37 -- # return 1 00:11:33.110 05:51:54 -- common/autotest_common.sh@653 -- # es=1 00:11:33.110 05:51:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:33.110 05:51:54 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:33.110 05:51:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:33.110 05:51:54 -- target/tls.sh@183 -- # killprocess 76665 00:11:33.110 05:51:54 -- common/autotest_common.sh@936 -- # '[' -z 76665 ']' 00:11:33.110 05:51:54 -- common/autotest_common.sh@940 -- # kill -0 76665 00:11:33.110 05:51:54 -- common/autotest_common.sh@941 -- # uname 00:11:33.110 05:51:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:33.110 05:51:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76665 00:11:33.110 killing process with pid 76665 00:11:33.110 05:51:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:33.110 05:51:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:33.110 05:51:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76665' 00:11:33.110 05:51:54 -- common/autotest_common.sh@955 -- # kill 76665 00:11:33.110 05:51:54 -- common/autotest_common.sh@960 -- # wait 76665 00:11:33.369 05:51:54 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:33.369 05:51:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:33.369 05:51:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:33.369 05:51:54 -- common/autotest_common.sh@10 -- # set +x 00:11:33.369 05:51:54 -- nvmf/common.sh@469 -- # nvmfpid=76883 00:11:33.369 05:51:54 -- nvmf/common.sh@470 -- # waitforlisten 76883 00:11:33.369 05:51:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:33.369 05:51:54 -- common/autotest_common.sh@829 -- # '[' -z 76883 ']' 00:11:33.369 05:51:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.369 05:51:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.369 05:51:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.369 05:51:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.369 05:51:54 -- common/autotest_common.sh@10 -- # set +x 00:11:33.369 [2024-12-15 05:51:54.869130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:33.369 [2024-12-15 05:51:54.869450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.628 [2024-12-15 05:51:55.011119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.628 [2024-12-15 05:51:55.050725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:33.628 [2024-12-15 05:51:55.050911] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.629 [2024-12-15 05:51:55.050929] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.629 [2024-12-15 05:51:55.050941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
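The failure a few lines above (JSON-RPC error -22, "Could not retrieve PSK from file") is deliberate: the test first relaxes the key file's mode with chmod 0666, and, as the trace shows, the PSK loader rejects a key file that is readable by group or others ("Incorrect permissions for PSK file"). The trace later restores chmod 0600, after which the same attach and add_host calls succeed. A small illustration, using the same key file as in the trace:

  chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # too permissive: attach/add_host fail with "Incorrect permissions for PSK file"
  chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # owner-only access: the PSK is accepted again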
00:11:33.629 [2024-12-15 05:51:55.050988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.196 05:51:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.196 05:51:55 -- common/autotest_common.sh@862 -- # return 0 00:11:34.196 05:51:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:34.196 05:51:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:34.196 05:51:55 -- common/autotest_common.sh@10 -- # set +x 00:11:34.455 05:51:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.455 05:51:55 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:34.455 05:51:55 -- common/autotest_common.sh@650 -- # local es=0 00:11:34.455 05:51:55 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:34.455 05:51:55 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:11:34.455 05:51:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.455 05:51:55 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:11:34.455 05:51:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.455 05:51:55 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:34.455 05:51:55 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:34.455 05:51:55 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:34.455 [2024-12-15 05:51:56.087638] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.714 05:51:56 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:34.973 05:51:56 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:34.973 [2024-12-15 05:51:56.603913] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:34.973 [2024-12-15 05:51:56.604141] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.231 05:51:56 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:35.231 malloc0 00:11:35.231 05:51:56 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:35.489 05:51:57 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:35.748 [2024-12-15 05:51:57.314030] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:35.748 [2024-12-15 05:51:57.314067] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:35.748 [2024-12-15 05:51:57.314100] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:35.748 request: 00:11:35.748 { 00:11:35.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.748 "host": "nqn.2016-06.io.spdk:host1", 00:11:35.748 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:35.748 "method": "nvmf_subsystem_add_host", 00:11:35.748 
"req_id": 1 00:11:35.748 } 00:11:35.748 Got JSON-RPC error response 00:11:35.748 response: 00:11:35.748 { 00:11:35.748 "code": -32603, 00:11:35.748 "message": "Internal error" 00:11:35.748 } 00:11:35.748 05:51:57 -- common/autotest_common.sh@653 -- # es=1 00:11:35.748 05:51:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:35.748 05:51:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:35.748 05:51:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:35.748 05:51:57 -- target/tls.sh@189 -- # killprocess 76883 00:11:35.748 05:51:57 -- common/autotest_common.sh@936 -- # '[' -z 76883 ']' 00:11:35.748 05:51:57 -- common/autotest_common.sh@940 -- # kill -0 76883 00:11:35.748 05:51:57 -- common/autotest_common.sh@941 -- # uname 00:11:35.748 05:51:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:35.748 05:51:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76883 00:11:35.748 killing process with pid 76883 00:11:35.748 05:51:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:35.748 05:51:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:35.748 05:51:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76883' 00:11:35.748 05:51:57 -- common/autotest_common.sh@955 -- # kill 76883 00:11:35.748 05:51:57 -- common/autotest_common.sh@960 -- # wait 76883 00:11:36.007 05:51:57 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:36.007 05:51:57 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:36.007 05:51:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:36.007 05:51:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.007 05:51:57 -- common/autotest_common.sh@10 -- # set +x 00:11:36.007 05:51:57 -- nvmf/common.sh@469 -- # nvmfpid=76953 00:11:36.007 05:51:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:36.007 05:51:57 -- nvmf/common.sh@470 -- # waitforlisten 76953 00:11:36.007 05:51:57 -- common/autotest_common.sh@829 -- # '[' -z 76953 ']' 00:11:36.007 05:51:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.007 05:51:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.007 05:51:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.007 05:51:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.007 05:51:57 -- common/autotest_common.sh@10 -- # set +x 00:11:36.007 [2024-12-15 05:51:57.568318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:36.007 [2024-12-15 05:51:57.568436] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.266 [2024-12-15 05:51:57.707761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.266 [2024-12-15 05:51:57.738946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:36.266 [2024-12-15 05:51:57.739092] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:36.266 [2024-12-15 05:51:57.739104] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.266 [2024-12-15 05:51:57.739112] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.266 [2024-12-15 05:51:57.739135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.203 05:51:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.203 05:51:58 -- common/autotest_common.sh@862 -- # return 0 00:11:37.203 05:51:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:37.203 05:51:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:37.203 05:51:58 -- common/autotest_common.sh@10 -- # set +x 00:11:37.203 05:51:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.203 05:51:58 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:37.203 05:51:58 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:37.203 05:51:58 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:37.203 [2024-12-15 05:51:58.840775] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.462 05:51:58 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:37.462 05:51:59 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:37.721 [2024-12-15 05:51:59.268897] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:37.721 [2024-12-15 05:51:59.269436] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.721 05:51:59 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:37.981 malloc0 00:11:37.981 05:51:59 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:38.239 05:51:59 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:38.498 05:52:00 -- target/tls.sh@197 -- # bdevperf_pid=77002 00:11:38.498 05:52:00 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:38.498 05:52:00 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:38.498 05:52:00 -- target/tls.sh@200 -- # waitforlisten 77002 /var/tmp/bdevperf.sock 00:11:38.498 05:52:00 -- common/autotest_common.sh@829 -- # '[' -z 77002 ']' 00:11:38.498 05:52:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:38.498 05:52:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.498 05:52:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:38.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
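The setup_nvmf_tgt helper that just ran (target/tls.sh@194) reduces to the RPC sequence below. This is a sketch with values copied from the trace (the RPC shell variable is only a local abbreviation); the -k flag on the listener requests the experimental TLS secure channel, and --psk on nvmf_subsystem_add_host binds the host NQN to the pre-shared key file:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt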
00:11:38.498 05:52:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.498 05:52:00 -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 [2024-12-15 05:52:00.054425] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:38.498 [2024-12-15 05:52:00.054749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77002 ] 00:11:38.757 [2024-12-15 05:52:00.189321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.757 [2024-12-15 05:52:00.223504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.691 05:52:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.691 05:52:01 -- common/autotest_common.sh@862 -- # return 0 00:11:39.691 05:52:01 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:39.691 [2024-12-15 05:52:01.319847] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:39.949 TLSTESTn1 00:11:39.949 05:52:01 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:40.209 05:52:01 -- target/tls.sh@205 -- # tgtconf='{ 00:11:40.209 "subsystems": [ 00:11:40.209 { 00:11:40.209 "subsystem": "iobuf", 00:11:40.209 "config": [ 00:11:40.209 { 00:11:40.209 "method": "iobuf_set_options", 00:11:40.209 "params": { 00:11:40.209 "small_pool_count": 8192, 00:11:40.209 "large_pool_count": 1024, 00:11:40.209 "small_bufsize": 8192, 00:11:40.209 "large_bufsize": 135168 00:11:40.209 } 00:11:40.209 } 00:11:40.209 ] 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "subsystem": "sock", 00:11:40.209 "config": [ 00:11:40.209 { 00:11:40.209 "method": "sock_impl_set_options", 00:11:40.209 "params": { 00:11:40.209 "impl_name": "uring", 00:11:40.209 "recv_buf_size": 2097152, 00:11:40.209 "send_buf_size": 2097152, 00:11:40.209 "enable_recv_pipe": true, 00:11:40.209 "enable_quickack": false, 00:11:40.209 "enable_placement_id": 0, 00:11:40.209 "enable_zerocopy_send_server": false, 00:11:40.209 "enable_zerocopy_send_client": false, 00:11:40.209 "zerocopy_threshold": 0, 00:11:40.209 "tls_version": 0, 00:11:40.209 "enable_ktls": false 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "sock_impl_set_options", 00:11:40.209 "params": { 00:11:40.209 "impl_name": "posix", 00:11:40.209 "recv_buf_size": 2097152, 00:11:40.209 "send_buf_size": 2097152, 00:11:40.209 "enable_recv_pipe": true, 00:11:40.209 "enable_quickack": false, 00:11:40.209 "enable_placement_id": 0, 00:11:40.209 "enable_zerocopy_send_server": true, 00:11:40.209 "enable_zerocopy_send_client": false, 00:11:40.209 "zerocopy_threshold": 0, 00:11:40.209 "tls_version": 0, 00:11:40.209 "enable_ktls": false 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "sock_impl_set_options", 00:11:40.209 "params": { 00:11:40.209 "impl_name": "ssl", 00:11:40.209 "recv_buf_size": 4096, 00:11:40.209 "send_buf_size": 4096, 00:11:40.209 "enable_recv_pipe": true, 00:11:40.209 "enable_quickack": false, 00:11:40.209 "enable_placement_id": 0, 00:11:40.209 "enable_zerocopy_send_server": true, 00:11:40.209 "enable_zerocopy_send_client": false, 00:11:40.209 
"zerocopy_threshold": 0, 00:11:40.209 "tls_version": 0, 00:11:40.209 "enable_ktls": false 00:11:40.209 } 00:11:40.209 } 00:11:40.209 ] 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "subsystem": "vmd", 00:11:40.209 "config": [] 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "subsystem": "accel", 00:11:40.209 "config": [ 00:11:40.209 { 00:11:40.209 "method": "accel_set_options", 00:11:40.209 "params": { 00:11:40.209 "small_cache_size": 128, 00:11:40.209 "large_cache_size": 16, 00:11:40.209 "task_count": 2048, 00:11:40.209 "sequence_count": 2048, 00:11:40.209 "buf_count": 2048 00:11:40.209 } 00:11:40.209 } 00:11:40.209 ] 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "subsystem": "bdev", 00:11:40.209 "config": [ 00:11:40.209 { 00:11:40.209 "method": "bdev_set_options", 00:11:40.209 "params": { 00:11:40.209 "bdev_io_pool_size": 65535, 00:11:40.209 "bdev_io_cache_size": 256, 00:11:40.209 "bdev_auto_examine": true, 00:11:40.209 "iobuf_small_cache_size": 128, 00:11:40.209 "iobuf_large_cache_size": 16 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "bdev_raid_set_options", 00:11:40.209 "params": { 00:11:40.209 "process_window_size_kb": 1024 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "bdev_iscsi_set_options", 00:11:40.209 "params": { 00:11:40.209 "timeout_sec": 30 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "bdev_nvme_set_options", 00:11:40.209 "params": { 00:11:40.209 "action_on_timeout": "none", 00:11:40.209 "timeout_us": 0, 00:11:40.209 "timeout_admin_us": 0, 00:11:40.209 "keep_alive_timeout_ms": 10000, 00:11:40.209 "transport_retry_count": 4, 00:11:40.209 "arbitration_burst": 0, 00:11:40.209 "low_priority_weight": 0, 00:11:40.209 "medium_priority_weight": 0, 00:11:40.209 "high_priority_weight": 0, 00:11:40.209 "nvme_adminq_poll_period_us": 10000, 00:11:40.209 "nvme_ioq_poll_period_us": 0, 00:11:40.209 "io_queue_requests": 0, 00:11:40.209 "delay_cmd_submit": true, 00:11:40.209 "bdev_retry_count": 3, 00:11:40.209 "transport_ack_timeout": 0, 00:11:40.209 "ctrlr_loss_timeout_sec": 0, 00:11:40.209 "reconnect_delay_sec": 0, 00:11:40.209 "fast_io_fail_timeout_sec": 0, 00:11:40.209 "generate_uuids": false, 00:11:40.209 "transport_tos": 0, 00:11:40.209 "io_path_stat": false, 00:11:40.209 "allow_accel_sequence": false 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "bdev_nvme_set_hotplug", 00:11:40.209 "params": { 00:11:40.209 "period_us": 100000, 00:11:40.209 "enable": false 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "bdev_malloc_create", 00:11:40.209 "params": { 00:11:40.209 "name": "malloc0", 00:11:40.209 "num_blocks": 8192, 00:11:40.209 "block_size": 4096, 00:11:40.209 "physical_block_size": 4096, 00:11:40.209 "uuid": "072f890a-36d7-4c07-8a0b-668f8ed0000b", 00:11:40.209 "optimal_io_boundary": 0 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "bdev_wait_for_examine" 00:11:40.209 } 00:11:40.209 ] 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "subsystem": "nbd", 00:11:40.209 "config": [] 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "subsystem": "scheduler", 00:11:40.209 "config": [ 00:11:40.209 { 00:11:40.209 "method": "framework_set_scheduler", 00:11:40.209 "params": { 00:11:40.209 "name": "static" 00:11:40.209 } 00:11:40.209 } 00:11:40.209 ] 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "subsystem": "nvmf", 00:11:40.209 "config": [ 00:11:40.209 { 00:11:40.209 "method": "nvmf_set_config", 00:11:40.209 "params": { 00:11:40.209 "discovery_filter": "match_any", 00:11:40.209 
"admin_cmd_passthru": { 00:11:40.209 "identify_ctrlr": false 00:11:40.209 } 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "nvmf_set_max_subsystems", 00:11:40.209 "params": { 00:11:40.209 "max_subsystems": 1024 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "nvmf_set_crdt", 00:11:40.209 "params": { 00:11:40.209 "crdt1": 0, 00:11:40.209 "crdt2": 0, 00:11:40.209 "crdt3": 0 00:11:40.209 } 00:11:40.209 }, 00:11:40.209 { 00:11:40.209 "method": "nvmf_create_transport", 00:11:40.209 "params": { 00:11:40.209 "trtype": "TCP", 00:11:40.209 "max_queue_depth": 128, 00:11:40.209 "max_io_qpairs_per_ctrlr": 127, 00:11:40.210 "in_capsule_data_size": 4096, 00:11:40.210 "max_io_size": 131072, 00:11:40.210 "io_unit_size": 131072, 00:11:40.210 "max_aq_depth": 128, 00:11:40.210 "num_shared_buffers": 511, 00:11:40.210 "buf_cache_size": 4294967295, 00:11:40.210 "dif_insert_or_strip": false, 00:11:40.210 "zcopy": false, 00:11:40.210 "c2h_success": false, 00:11:40.210 "sock_priority": 0, 00:11:40.210 "abort_timeout_sec": 1 00:11:40.210 } 00:11:40.210 }, 00:11:40.210 { 00:11:40.210 "method": "nvmf_create_subsystem", 00:11:40.210 "params": { 00:11:40.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.210 "allow_any_host": false, 00:11:40.210 "serial_number": "SPDK00000000000001", 00:11:40.210 "model_number": "SPDK bdev Controller", 00:11:40.210 "max_namespaces": 10, 00:11:40.210 "min_cntlid": 1, 00:11:40.210 "max_cntlid": 65519, 00:11:40.210 "ana_reporting": false 00:11:40.210 } 00:11:40.210 }, 00:11:40.210 { 00:11:40.210 "method": "nvmf_subsystem_add_host", 00:11:40.210 "params": { 00:11:40.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.210 "host": "nqn.2016-06.io.spdk:host1", 00:11:40.210 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:40.210 } 00:11:40.210 }, 00:11:40.210 { 00:11:40.210 "method": "nvmf_subsystem_add_ns", 00:11:40.210 "params": { 00:11:40.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.210 "namespace": { 00:11:40.210 "nsid": 1, 00:11:40.210 "bdev_name": "malloc0", 00:11:40.210 "nguid": "072F890A36D74C078A0B668F8ED0000B", 00:11:40.210 "uuid": "072f890a-36d7-4c07-8a0b-668f8ed0000b" 00:11:40.210 } 00:11:40.210 } 00:11:40.210 }, 00:11:40.210 { 00:11:40.210 "method": "nvmf_subsystem_add_listener", 00:11:40.210 "params": { 00:11:40.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.210 "listen_address": { 00:11:40.210 "trtype": "TCP", 00:11:40.210 "adrfam": "IPv4", 00:11:40.210 "traddr": "10.0.0.2", 00:11:40.210 "trsvcid": "4420" 00:11:40.210 }, 00:11:40.210 "secure_channel": true 00:11:40.210 } 00:11:40.210 } 00:11:40.210 ] 00:11:40.210 } 00:11:40.210 ] 00:11:40.210 }' 00:11:40.210 05:52:01 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:40.777 05:52:02 -- target/tls.sh@206 -- # bdevperfconf='{ 00:11:40.777 "subsystems": [ 00:11:40.777 { 00:11:40.777 "subsystem": "iobuf", 00:11:40.777 "config": [ 00:11:40.777 { 00:11:40.777 "method": "iobuf_set_options", 00:11:40.777 "params": { 00:11:40.777 "small_pool_count": 8192, 00:11:40.777 "large_pool_count": 1024, 00:11:40.777 "small_bufsize": 8192, 00:11:40.777 "large_bufsize": 135168 00:11:40.777 } 00:11:40.777 } 00:11:40.777 ] 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "subsystem": "sock", 00:11:40.777 "config": [ 00:11:40.777 { 00:11:40.777 "method": "sock_impl_set_options", 00:11:40.777 "params": { 00:11:40.777 "impl_name": "uring", 00:11:40.777 "recv_buf_size": 2097152, 00:11:40.777 "send_buf_size": 2097152, 
00:11:40.777 "enable_recv_pipe": true, 00:11:40.777 "enable_quickack": false, 00:11:40.777 "enable_placement_id": 0, 00:11:40.777 "enable_zerocopy_send_server": false, 00:11:40.777 "enable_zerocopy_send_client": false, 00:11:40.777 "zerocopy_threshold": 0, 00:11:40.777 "tls_version": 0, 00:11:40.777 "enable_ktls": false 00:11:40.777 } 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "method": "sock_impl_set_options", 00:11:40.777 "params": { 00:11:40.777 "impl_name": "posix", 00:11:40.777 "recv_buf_size": 2097152, 00:11:40.777 "send_buf_size": 2097152, 00:11:40.777 "enable_recv_pipe": true, 00:11:40.777 "enable_quickack": false, 00:11:40.777 "enable_placement_id": 0, 00:11:40.777 "enable_zerocopy_send_server": true, 00:11:40.777 "enable_zerocopy_send_client": false, 00:11:40.777 "zerocopy_threshold": 0, 00:11:40.777 "tls_version": 0, 00:11:40.777 "enable_ktls": false 00:11:40.777 } 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "method": "sock_impl_set_options", 00:11:40.777 "params": { 00:11:40.777 "impl_name": "ssl", 00:11:40.777 "recv_buf_size": 4096, 00:11:40.777 "send_buf_size": 4096, 00:11:40.777 "enable_recv_pipe": true, 00:11:40.777 "enable_quickack": false, 00:11:40.777 "enable_placement_id": 0, 00:11:40.777 "enable_zerocopy_send_server": true, 00:11:40.777 "enable_zerocopy_send_client": false, 00:11:40.777 "zerocopy_threshold": 0, 00:11:40.777 "tls_version": 0, 00:11:40.777 "enable_ktls": false 00:11:40.777 } 00:11:40.777 } 00:11:40.777 ] 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "subsystem": "vmd", 00:11:40.777 "config": [] 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "subsystem": "accel", 00:11:40.777 "config": [ 00:11:40.777 { 00:11:40.777 "method": "accel_set_options", 00:11:40.777 "params": { 00:11:40.777 "small_cache_size": 128, 00:11:40.777 "large_cache_size": 16, 00:11:40.777 "task_count": 2048, 00:11:40.777 "sequence_count": 2048, 00:11:40.777 "buf_count": 2048 00:11:40.777 } 00:11:40.777 } 00:11:40.777 ] 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "subsystem": "bdev", 00:11:40.777 "config": [ 00:11:40.777 { 00:11:40.777 "method": "bdev_set_options", 00:11:40.777 "params": { 00:11:40.777 "bdev_io_pool_size": 65535, 00:11:40.777 "bdev_io_cache_size": 256, 00:11:40.777 "bdev_auto_examine": true, 00:11:40.777 "iobuf_small_cache_size": 128, 00:11:40.777 "iobuf_large_cache_size": 16 00:11:40.777 } 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "method": "bdev_raid_set_options", 00:11:40.777 "params": { 00:11:40.777 "process_window_size_kb": 1024 00:11:40.777 } 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "method": "bdev_iscsi_set_options", 00:11:40.777 "params": { 00:11:40.777 "timeout_sec": 30 00:11:40.777 } 00:11:40.777 }, 00:11:40.777 { 00:11:40.777 "method": "bdev_nvme_set_options", 00:11:40.777 "params": { 00:11:40.777 "action_on_timeout": "none", 00:11:40.777 "timeout_us": 0, 00:11:40.777 "timeout_admin_us": 0, 00:11:40.777 "keep_alive_timeout_ms": 10000, 00:11:40.777 "transport_retry_count": 4, 00:11:40.777 "arbitration_burst": 0, 00:11:40.777 "low_priority_weight": 0, 00:11:40.777 "medium_priority_weight": 0, 00:11:40.777 "high_priority_weight": 0, 00:11:40.777 "nvme_adminq_poll_period_us": 10000, 00:11:40.777 "nvme_ioq_poll_period_us": 0, 00:11:40.777 "io_queue_requests": 512, 00:11:40.777 "delay_cmd_submit": true, 00:11:40.777 "bdev_retry_count": 3, 00:11:40.777 "transport_ack_timeout": 0, 00:11:40.777 "ctrlr_loss_timeout_sec": 0, 00:11:40.777 "reconnect_delay_sec": 0, 00:11:40.777 "fast_io_fail_timeout_sec": 0, 00:11:40.778 "generate_uuids": false, 00:11:40.778 
"transport_tos": 0, 00:11:40.778 "io_path_stat": false, 00:11:40.778 "allow_accel_sequence": false 00:11:40.778 } 00:11:40.778 }, 00:11:40.778 { 00:11:40.778 "method": "bdev_nvme_attach_controller", 00:11:40.778 "params": { 00:11:40.778 "name": "TLSTEST", 00:11:40.778 "trtype": "TCP", 00:11:40.778 "adrfam": "IPv4", 00:11:40.778 "traddr": "10.0.0.2", 00:11:40.778 "trsvcid": "4420", 00:11:40.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.778 "prchk_reftag": false, 00:11:40.778 "prchk_guard": false, 00:11:40.778 "ctrlr_loss_timeout_sec": 0, 00:11:40.778 "reconnect_delay_sec": 0, 00:11:40.778 "fast_io_fail_timeout_sec": 0, 00:11:40.778 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:40.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:40.778 "hdgst": false, 00:11:40.778 "ddgst": false 00:11:40.778 } 00:11:40.778 }, 00:11:40.778 { 00:11:40.778 "method": "bdev_nvme_set_hotplug", 00:11:40.778 "params": { 00:11:40.778 "period_us": 100000, 00:11:40.778 "enable": false 00:11:40.778 } 00:11:40.778 }, 00:11:40.778 { 00:11:40.778 "method": "bdev_wait_for_examine" 00:11:40.778 } 00:11:40.778 ] 00:11:40.778 }, 00:11:40.778 { 00:11:40.778 "subsystem": "nbd", 00:11:40.778 "config": [] 00:11:40.778 } 00:11:40.778 ] 00:11:40.778 }' 00:11:40.778 05:52:02 -- target/tls.sh@208 -- # killprocess 77002 00:11:40.778 05:52:02 -- common/autotest_common.sh@936 -- # '[' -z 77002 ']' 00:11:40.778 05:52:02 -- common/autotest_common.sh@940 -- # kill -0 77002 00:11:40.778 05:52:02 -- common/autotest_common.sh@941 -- # uname 00:11:40.778 05:52:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.778 05:52:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77002 00:11:40.778 killing process with pid 77002 00:11:40.778 Received shutdown signal, test time was about 10.000000 seconds 00:11:40.778 00:11:40.778 Latency(us) 00:11:40.778 [2024-12-15T05:52:02.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.778 [2024-12-15T05:52:02.419Z] =================================================================================================================== 00:11:40.778 [2024-12-15T05:52:02.419Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:40.778 05:52:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:40.778 05:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:40.778 05:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77002' 00:11:40.778 05:52:02 -- common/autotest_common.sh@955 -- # kill 77002 00:11:40.778 05:52:02 -- common/autotest_common.sh@960 -- # wait 77002 00:11:40.778 05:52:02 -- target/tls.sh@209 -- # killprocess 76953 00:11:40.778 05:52:02 -- common/autotest_common.sh@936 -- # '[' -z 76953 ']' 00:11:40.778 05:52:02 -- common/autotest_common.sh@940 -- # kill -0 76953 00:11:40.778 05:52:02 -- common/autotest_common.sh@941 -- # uname 00:11:40.778 05:52:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.778 05:52:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76953 00:11:40.778 killing process with pid 76953 00:11:40.778 05:52:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:40.778 05:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:40.778 05:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76953' 00:11:40.778 05:52:02 -- common/autotest_common.sh@955 -- # kill 76953 00:11:40.778 05:52:02 -- common/autotest_common.sh@960 -- # 
wait 76953 00:11:41.037 05:52:02 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:11:41.037 05:52:02 -- target/tls.sh@212 -- # echo '{ 00:11:41.037 "subsystems": [ 00:11:41.037 { 00:11:41.037 "subsystem": "iobuf", 00:11:41.037 "config": [ 00:11:41.037 { 00:11:41.037 "method": "iobuf_set_options", 00:11:41.037 "params": { 00:11:41.037 "small_pool_count": 8192, 00:11:41.037 "large_pool_count": 1024, 00:11:41.037 "small_bufsize": 8192, 00:11:41.037 "large_bufsize": 135168 00:11:41.037 } 00:11:41.037 } 00:11:41.037 ] 00:11:41.037 }, 00:11:41.037 { 00:11:41.037 "subsystem": "sock", 00:11:41.037 "config": [ 00:11:41.037 { 00:11:41.037 "method": "sock_impl_set_options", 00:11:41.037 "params": { 00:11:41.037 "impl_name": "uring", 00:11:41.037 "recv_buf_size": 2097152, 00:11:41.037 "send_buf_size": 2097152, 00:11:41.037 "enable_recv_pipe": true, 00:11:41.037 "enable_quickack": false, 00:11:41.037 "enable_placement_id": 0, 00:11:41.037 "enable_zerocopy_send_server": false, 00:11:41.037 "enable_zerocopy_send_client": false, 00:11:41.037 "zerocopy_threshold": 0, 00:11:41.037 "tls_version": 0, 00:11:41.037 "enable_ktls": false 00:11:41.037 } 00:11:41.037 }, 00:11:41.037 { 00:11:41.037 "method": "sock_impl_set_options", 00:11:41.037 "params": { 00:11:41.037 "impl_name": "posix", 00:11:41.037 "recv_buf_size": 2097152, 00:11:41.037 "send_buf_size": 2097152, 00:11:41.037 "enable_recv_pipe": true, 00:11:41.037 "enable_quickack": false, 00:11:41.037 "enable_placement_id": 0, 00:11:41.037 "enable_zerocopy_send_server": true, 00:11:41.037 "enable_zerocopy_send_client": false, 00:11:41.037 "zerocopy_threshold": 0, 00:11:41.037 "tls_version": 0, 00:11:41.037 "enable_ktls": false 00:11:41.037 } 00:11:41.037 }, 00:11:41.037 { 00:11:41.037 "method": "sock_impl_set_options", 00:11:41.037 "params": { 00:11:41.037 "impl_name": "ssl", 00:11:41.037 "recv_buf_size": 4096, 00:11:41.037 "send_buf_size": 4096, 00:11:41.037 "enable_recv_pipe": true, 00:11:41.037 "enable_quickack": false, 00:11:41.037 "enable_placement_id": 0, 00:11:41.037 "enable_zerocopy_send_server": true, 00:11:41.037 "enable_zerocopy_send_client": false, 00:11:41.037 "zerocopy_threshold": 0, 00:11:41.037 "tls_version": 0, 00:11:41.037 "enable_ktls": false 00:11:41.037 } 00:11:41.037 } 00:11:41.037 ] 00:11:41.037 }, 00:11:41.037 { 00:11:41.037 "subsystem": "vmd", 00:11:41.037 "config": [] 00:11:41.037 }, 00:11:41.037 { 00:11:41.037 "subsystem": "accel", 00:11:41.037 "config": [ 00:11:41.037 { 00:11:41.037 "method": "accel_set_options", 00:11:41.037 "params": { 00:11:41.037 "small_cache_size": 128, 00:11:41.037 "large_cache_size": 16, 00:11:41.037 "task_count": 2048, 00:11:41.037 "sequence_count": 2048, 00:11:41.037 "buf_count": 2048 00:11:41.037 } 00:11:41.037 } 00:11:41.037 ] 00:11:41.037 }, 00:11:41.037 { 00:11:41.037 "subsystem": "bdev", 00:11:41.037 "config": [ 00:11:41.037 { 00:11:41.037 "method": "bdev_set_options", 00:11:41.037 "params": { 00:11:41.037 "bdev_io_pool_size": 65535, 00:11:41.037 "bdev_io_cache_size": 256, 00:11:41.037 "bdev_auto_examine": true, 00:11:41.037 "iobuf_small_cache_size": 128, 00:11:41.037 "iobuf_large_cache_size": 16 00:11:41.037 } 00:11:41.037 }, 00:11:41.037 { 00:11:41.037 "method": "bdev_raid_set_options", 00:11:41.037 "params": { 00:11:41.038 "process_window_size_kb": 1024 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "bdev_iscsi_set_options", 00:11:41.038 "params": { 00:11:41.038 "timeout_sec": 30 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": 
"bdev_nvme_set_options", 00:11:41.038 "params": { 00:11:41.038 "action_on_timeout": "none", 00:11:41.038 "timeout_us": 0, 00:11:41.038 "timeout_admin_us": 0, 00:11:41.038 "keep_alive_timeout_ms": 10000, 00:11:41.038 "transport_retry_count": 4, 00:11:41.038 "arbitration_burst": 0, 00:11:41.038 "low_priority_weight": 0, 00:11:41.038 "medium_priority_weight": 0, 00:11:41.038 "high_priority_weight": 0, 00:11:41.038 "nvme_adminq_poll_period_us": 10000, 00:11:41.038 "nvme_ioq_poll_period_us": 0, 00:11:41.038 "io_queue_requests": 0, 00:11:41.038 "delay_cmd_submit": true, 00:11:41.038 "bdev_retry_count": 3, 00:11:41.038 "transport_ack_timeout": 0, 00:11:41.038 "ctrlr_loss_timeout_sec": 0, 00:11:41.038 "reconnect_delay_sec": 0, 00:11:41.038 "fast_io_fail_timeout_sec": 0, 00:11:41.038 "generate_uuids": false, 00:11:41.038 "transport_tos": 0, 00:11:41.038 "io_path_stat": false, 00:11:41.038 "allow_accel_sequence": false 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "bdev_nvme_set_hotplug", 00:11:41.038 "params": { 00:11:41.038 "period_us": 100000, 00:11:41.038 "enable": false 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "bdev_malloc_create", 00:11:41.038 "params": { 00:11:41.038 "name": "malloc0", 00:11:41.038 "num_blocks": 8192, 00:11:41.038 "block_size": 4096, 00:11:41.038 "physical_block_size": 4096, 00:11:41.038 "uuid": "072f890a-36d7-4c07-8a0b-668f8ed0000b", 00:11:41.038 "optimal_io_boundary": 0 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "bdev_wait_for_examine" 00:11:41.038 } 00:11:41.038 ] 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "subsystem": "nbd", 00:11:41.038 "config": [] 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "subsystem": "scheduler", 00:11:41.038 "config": [ 00:11:41.038 { 00:11:41.038 "method": "framework_set_scheduler", 00:11:41.038 "params": { 00:11:41.038 "name": "static" 00:11:41.038 } 00:11:41.038 } 00:11:41.038 ] 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "subsystem": "nvmf", 00:11:41.038 "config": [ 00:11:41.038 { 00:11:41.038 "method": "nvmf_set_config", 00:11:41.038 "params": { 00:11:41.038 "discovery_filter": "match_any", 00:11:41.038 "admin_cmd_passthru": { 00:11:41.038 "identify_ctrlr": false 00:11:41.038 } 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "nvmf_set_max_subsystems", 00:11:41.038 "params": { 00:11:41.038 "max_subsystems": 1024 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "nvmf_set_crdt", 00:11:41.038 "params": { 00:11:41.038 "crdt1": 0, 00:11:41.038 "crdt2": 0, 00:11:41.038 "crdt3": 0 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "nvmf_create_transport", 00:11:41.038 "params": { 00:11:41.038 "trtype": "TCP", 00:11:41.038 "max_queue_depth": 128, 00:11:41.038 "max_io_qpairs_per_ctrlr": 127, 00:11:41.038 "in_capsule_data_size": 4096, 00:11:41.038 "max_io_size": 131072, 00:11:41.038 "io_unit_size": 131072, 00:11:41.038 "max_aq_depth": 128, 00:11:41.038 "num_shared_buffers": 511, 00:11:41.038 "buf_cache_size": 4294967295, 00:11:41.038 "dif_insert_or_strip": false, 00:11:41.038 "zcopy": false, 00:11:41.038 "c2h_success": false, 00:11:41.038 "sock_priority": 0, 00:11:41.038 "abort_timeout_sec": 1 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "nvmf_create_subsystem", 00:11:41.038 "params": { 00:11:41.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.038 "allow_any_host": false, 00:11:41.038 "serial_number": "SPDK00000000000001", 00:11:41.038 "model_number": "SPDK bdev Controller", 00:11:41.038 
"max_namespaces": 10, 00:11:41.038 "min_cntlid": 1, 00:11:41.038 "max_cntlid": 65519, 00:11:41.038 "ana_reporting": false 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "nvmf_subsystem_add_host", 00:11:41.038 "params": { 00:11:41.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.038 "host": "nqn.2016-06.io.spdk:host1", 00:11:41.038 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "nvmf_subsystem_add_ns", 00:11:41.038 "params": { 00:11:41.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.038 "namespace": { 00:11:41.038 "nsid": 1, 00:11:41.038 "bdev_name": "malloc0", 00:11:41.038 "nguid": "072F890A36D74C078A0B668F8ED0000B", 00:11:41.038 "uuid": "072f890a-36d7-4c07-8a0b-668f8ed0000b" 00:11:41.038 } 00:11:41.038 } 00:11:41.038 }, 00:11:41.038 { 00:11:41.038 "method": "nvmf_subsystem_add_listener", 00:11:41.038 "params": { 00:11:41.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.038 "listen_address": { 00:11:41.038 "trtype": "TCP", 00:11:41.038 "adrfam": "IPv4", 00:11:41.038 "traddr": "10.0.0.2", 00:11:41.038 "trsvcid": "4420" 00:11:41.038 }, 00:11:41.038 "secure_channel": true 00:11:41.038 } 00:11:41.038 } 00:11:41.038 ] 00:11:41.038 } 00:11:41.038 ] 00:11:41.038 }' 00:11:41.038 05:52:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:41.038 05:52:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:41.038 05:52:02 -- common/autotest_common.sh@10 -- # set +x 00:11:41.038 05:52:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:41.038 05:52:02 -- nvmf/common.sh@469 -- # nvmfpid=77052 00:11:41.038 05:52:02 -- nvmf/common.sh@470 -- # waitforlisten 77052 00:11:41.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.038 05:52:02 -- common/autotest_common.sh@829 -- # '[' -z 77052 ']' 00:11:41.038 05:52:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.038 05:52:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.038 05:52:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.038 05:52:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.038 05:52:02 -- common/autotest_common.sh@10 -- # set +x 00:11:41.038 [2024-12-15 05:52:02.514019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:41.038 [2024-12-15 05:52:02.514312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.038 [2024-12-15 05:52:02.647346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.297 [2024-12-15 05:52:02.680269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:41.297 [2024-12-15 05:52:02.680546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.297 [2024-12-15 05:52:02.680659] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.297 [2024-12-15 05:52:02.680784] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:41.297 [2024-12-15 05:52:02.681005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.297 [2024-12-15 05:52:02.859334] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.297 [2024-12-15 05:52:02.891289] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:41.297 [2024-12-15 05:52:02.891654] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.232 05:52:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.232 05:52:03 -- common/autotest_common.sh@862 -- # return 0 00:11:42.232 05:52:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:42.232 05:52:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:42.232 05:52:03 -- common/autotest_common.sh@10 -- # set +x 00:11:42.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:42.232 05:52:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.232 05:52:03 -- target/tls.sh@216 -- # bdevperf_pid=77084 00:11:42.232 05:52:03 -- target/tls.sh@217 -- # waitforlisten 77084 /var/tmp/bdevperf.sock 00:11:42.232 05:52:03 -- common/autotest_common.sh@829 -- # '[' -z 77084 ']' 00:11:42.232 05:52:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:42.232 05:52:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.232 05:52:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:42.232 05:52:03 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:42.232 05:52:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.232 05:52:03 -- common/autotest_common.sh@10 -- # set +x 00:11:42.232 05:52:03 -- target/tls.sh@213 -- # echo '{ 00:11:42.232 "subsystems": [ 00:11:42.232 { 00:11:42.232 "subsystem": "iobuf", 00:11:42.232 "config": [ 00:11:42.232 { 00:11:42.232 "method": "iobuf_set_options", 00:11:42.232 "params": { 00:11:42.232 "small_pool_count": 8192, 00:11:42.232 "large_pool_count": 1024, 00:11:42.232 "small_bufsize": 8192, 00:11:42.232 "large_bufsize": 135168 00:11:42.232 } 00:11:42.232 } 00:11:42.232 ] 00:11:42.232 }, 00:11:42.232 { 00:11:42.232 "subsystem": "sock", 00:11:42.232 "config": [ 00:11:42.232 { 00:11:42.232 "method": "sock_impl_set_options", 00:11:42.232 "params": { 00:11:42.232 "impl_name": "uring", 00:11:42.232 "recv_buf_size": 2097152, 00:11:42.232 "send_buf_size": 2097152, 00:11:42.232 "enable_recv_pipe": true, 00:11:42.232 "enable_quickack": false, 00:11:42.232 "enable_placement_id": 0, 00:11:42.232 "enable_zerocopy_send_server": false, 00:11:42.232 "enable_zerocopy_send_client": false, 00:11:42.232 "zerocopy_threshold": 0, 00:11:42.232 "tls_version": 0, 00:11:42.232 "enable_ktls": false 00:11:42.232 } 00:11:42.232 }, 00:11:42.232 { 00:11:42.232 "method": "sock_impl_set_options", 00:11:42.232 "params": { 00:11:42.232 "impl_name": "posix", 00:11:42.232 "recv_buf_size": 2097152, 00:11:42.232 "send_buf_size": 2097152, 00:11:42.232 "enable_recv_pipe": true, 00:11:42.232 "enable_quickack": false, 00:11:42.232 "enable_placement_id": 0, 00:11:42.232 "enable_zerocopy_send_server": true, 00:11:42.232 "enable_zerocopy_send_client": false, 00:11:42.232 "zerocopy_threshold": 0, 00:11:42.232 "tls_version": 0, 00:11:42.232 
"enable_ktls": false 00:11:42.232 } 00:11:42.232 }, 00:11:42.232 { 00:11:42.232 "method": "sock_impl_set_options", 00:11:42.232 "params": { 00:11:42.232 "impl_name": "ssl", 00:11:42.232 "recv_buf_size": 4096, 00:11:42.232 "send_buf_size": 4096, 00:11:42.232 "enable_recv_pipe": true, 00:11:42.232 "enable_quickack": false, 00:11:42.232 "enable_placement_id": 0, 00:11:42.232 "enable_zerocopy_send_server": true, 00:11:42.232 "enable_zerocopy_send_client": false, 00:11:42.232 "zerocopy_threshold": 0, 00:11:42.232 "tls_version": 0, 00:11:42.232 "enable_ktls": false 00:11:42.232 } 00:11:42.232 } 00:11:42.232 ] 00:11:42.232 }, 00:11:42.232 { 00:11:42.232 "subsystem": "vmd", 00:11:42.232 "config": [] 00:11:42.232 }, 00:11:42.232 { 00:11:42.232 "subsystem": "accel", 00:11:42.232 "config": [ 00:11:42.232 { 00:11:42.232 "method": "accel_set_options", 00:11:42.232 "params": { 00:11:42.232 "small_cache_size": 128, 00:11:42.232 "large_cache_size": 16, 00:11:42.232 "task_count": 2048, 00:11:42.233 "sequence_count": 2048, 00:11:42.233 "buf_count": 2048 00:11:42.233 } 00:11:42.233 } 00:11:42.233 ] 00:11:42.233 }, 00:11:42.233 { 00:11:42.233 "subsystem": "bdev", 00:11:42.233 "config": [ 00:11:42.233 { 00:11:42.233 "method": "bdev_set_options", 00:11:42.233 "params": { 00:11:42.233 "bdev_io_pool_size": 65535, 00:11:42.233 "bdev_io_cache_size": 256, 00:11:42.233 "bdev_auto_examine": true, 00:11:42.233 "iobuf_small_cache_size": 128, 00:11:42.233 "iobuf_large_cache_size": 16 00:11:42.233 } 00:11:42.233 }, 00:11:42.233 { 00:11:42.233 "method": "bdev_raid_set_options", 00:11:42.233 "params": { 00:11:42.233 "process_window_size_kb": 1024 00:11:42.233 } 00:11:42.233 }, 00:11:42.233 { 00:11:42.233 "method": "bdev_iscsi_set_options", 00:11:42.233 "params": { 00:11:42.233 "timeout_sec": 30 00:11:42.233 } 00:11:42.233 }, 00:11:42.233 { 00:11:42.233 "method": "bdev_nvme_set_options", 00:11:42.233 "params": { 00:11:42.233 "action_on_timeout": "none", 00:11:42.233 "timeout_us": 0, 00:11:42.233 "timeout_admin_us": 0, 00:11:42.233 "keep_alive_timeout_ms": 10000, 00:11:42.233 "transport_retry_count": 4, 00:11:42.233 "arbitration_burst": 0, 00:11:42.233 "low_priority_weight": 0, 00:11:42.233 "medium_priority_weight": 0, 00:11:42.233 "high_priority_weight": 0, 00:11:42.233 "nvme_adminq_poll_period_us": 10000, 00:11:42.233 "nvme_ioq_poll_period_us": 0, 00:11:42.233 "io_queue_requests": 512, 00:11:42.233 "delay_cmd_submit": true, 00:11:42.233 "bdev_retry_count": 3, 00:11:42.233 "transport_ack_timeout": 0, 00:11:42.233 "ctrlr_loss_timeout_sec": 0, 00:11:42.233 "reconnect_delay_sec": 0, 00:11:42.233 "fast_io_fail_timeout_sec": 0, 00:11:42.233 "generate_uuids": false, 00:11:42.233 "transport_tos": 0, 00:11:42.233 "io_path_stat": false, 00:11:42.233 "allow_accel_sequence": false 00:11:42.233 } 00:11:42.233 }, 00:11:42.233 { 00:11:42.233 "method": "bdev_nvme_attach_controller", 00:11:42.233 "params": { 00:11:42.233 "name": "TLSTEST", 00:11:42.233 "trtype": "TCP", 00:11:42.233 "adrfam": "IPv4", 00:11:42.233 "traddr": "10.0.0.2", 00:11:42.233 "trsvcid": "4420", 00:11:42.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.233 "prchk_reftag": false, 00:11:42.233 "prchk_guard": false, 00:11:42.233 "ctrlr_loss_timeout_sec": 0, 00:11:42.233 "reconnect_delay_sec": 0, 00:11:42.233 "fast_io_fail_timeout_sec": 0, 00:11:42.233 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:42.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.233 "hdgst": false, 00:11:42.233 "ddgst": false 00:11:42.233 } 00:11:42.233 }, 00:11:42.233 
{ 00:11:42.233 "method": "bdev_nvme_set_hotplug", 00:11:42.233 "params": { 00:11:42.233 "period_us": 100000, 00:11:42.233 "enable": false 00:11:42.233 } 00:11:42.233 }, 00:11:42.233 { 00:11:42.233 "method": "bdev_wait_for_examine" 00:11:42.233 } 00:11:42.233 ] 00:11:42.233 }, 00:11:42.233 { 00:11:42.233 "subsystem": "nbd", 00:11:42.233 "config": [] 00:11:42.233 } 00:11:42.233 ] 00:11:42.233 }' 00:11:42.233 [2024-12-15 05:52:03.623166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:42.233 [2024-12-15 05:52:03.623608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77084 ] 00:11:42.233 [2024-12-15 05:52:03.772678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.233 [2024-12-15 05:52:03.812637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.492 [2024-12-15 05:52:03.940488] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:43.059 05:52:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.059 05:52:04 -- common/autotest_common.sh@862 -- # return 0 00:11:43.059 05:52:04 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:43.317 Running I/O for 10 seconds... 00:11:53.287 00:11:53.287 Latency(us) 00:11:53.287 [2024-12-15T05:52:14.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.287 [2024-12-15T05:52:14.928Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:53.287 Verification LBA range: start 0x0 length 0x2000 00:11:53.287 TLSTESTn1 : 10.01 5689.38 22.22 0.00 0.00 22460.67 4736.47 23950.43 00:11:53.287 [2024-12-15T05:52:14.928Z] =================================================================================================================== 00:11:53.287 [2024-12-15T05:52:14.928Z] Total : 5689.38 22.22 0.00 0.00 22460.67 4736.47 23950.43 00:11:53.287 0 00:11:53.287 05:52:14 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.287 05:52:14 -- target/tls.sh@223 -- # killprocess 77084 00:11:53.287 05:52:14 -- common/autotest_common.sh@936 -- # '[' -z 77084 ']' 00:11:53.287 05:52:14 -- common/autotest_common.sh@940 -- # kill -0 77084 00:11:53.287 05:52:14 -- common/autotest_common.sh@941 -- # uname 00:11:53.287 05:52:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:53.287 05:52:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77084 00:11:53.287 killing process with pid 77084 00:11:53.287 Received shutdown signal, test time was about 10.000000 seconds 00:11:53.287 00:11:53.287 Latency(us) 00:11:53.287 [2024-12-15T05:52:14.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.287 [2024-12-15T05:52:14.928Z] =================================================================================================================== 00:11:53.287 [2024-12-15T05:52:14.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:53.287 05:52:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:53.287 05:52:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:53.287 05:52:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77084' 00:11:53.287 05:52:14 -- 
common/autotest_common.sh@955 -- # kill 77084 00:11:53.287 05:52:14 -- common/autotest_common.sh@960 -- # wait 77084 00:11:53.545 05:52:14 -- target/tls.sh@224 -- # killprocess 77052 00:11:53.545 05:52:14 -- common/autotest_common.sh@936 -- # '[' -z 77052 ']' 00:11:53.545 05:52:14 -- common/autotest_common.sh@940 -- # kill -0 77052 00:11:53.545 05:52:14 -- common/autotest_common.sh@941 -- # uname 00:11:53.545 05:52:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:53.545 05:52:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77052 00:11:53.545 killing process with pid 77052 00:11:53.545 05:52:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:53.545 05:52:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:53.545 05:52:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77052' 00:11:53.545 05:52:14 -- common/autotest_common.sh@955 -- # kill 77052 00:11:53.545 05:52:14 -- common/autotest_common.sh@960 -- # wait 77052 00:11:53.545 05:52:15 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:11:53.545 05:52:15 -- target/tls.sh@227 -- # cleanup 00:11:53.545 05:52:15 -- target/tls.sh@15 -- # process_shm --id 0 00:11:53.545 05:52:15 -- common/autotest_common.sh@806 -- # type=--id 00:11:53.545 05:52:15 -- common/autotest_common.sh@807 -- # id=0 00:11:53.545 05:52:15 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:53.545 05:52:15 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:53.545 05:52:15 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:53.545 05:52:15 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:53.545 05:52:15 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:53.545 05:52:15 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:53.545 nvmf_trace.0 00:11:53.807 05:52:15 -- common/autotest_common.sh@821 -- # return 0 00:11:53.807 05:52:15 -- target/tls.sh@16 -- # killprocess 77084 00:11:53.807 05:52:15 -- common/autotest_common.sh@936 -- # '[' -z 77084 ']' 00:11:53.807 05:52:15 -- common/autotest_common.sh@940 -- # kill -0 77084 00:11:53.807 Process with pid 77084 is not found 00:11:53.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77084) - No such process 00:11:53.807 05:52:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77084 is not found' 00:11:53.807 05:52:15 -- target/tls.sh@17 -- # nvmftestfini 00:11:53.807 05:52:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:53.807 05:52:15 -- nvmf/common.sh@116 -- # sync 00:11:53.807 05:52:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:53.807 05:52:15 -- nvmf/common.sh@119 -- # set +e 00:11:53.807 05:52:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:53.807 05:52:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:53.807 rmmod nvme_tcp 00:11:53.807 rmmod nvme_fabrics 00:11:53.807 rmmod nvme_keyring 00:11:53.807 05:52:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:53.807 05:52:15 -- nvmf/common.sh@123 -- # set -e 00:11:53.807 05:52:15 -- nvmf/common.sh@124 -- # return 0 00:11:53.807 05:52:15 -- nvmf/common.sh@477 -- # '[' -n 77052 ']' 00:11:53.807 Process with pid 77052 is not found 00:11:53.807 05:52:15 -- nvmf/common.sh@478 -- # killprocess 77052 00:11:53.807 05:52:15 -- common/autotest_common.sh@936 -- # '[' -z 77052 ']' 00:11:53.807 05:52:15 -- 
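The teardown traced above repeats after every run: the SPDK trace shared-memory file is archived into the output directory, each test process is killed only if it is still alive and still looks like an SPDK reactor, and the kernel initiator modules are unloaded. A condensed sketch of those steps, assuming the pid and paths from this run (the real killprocess and process_shm helpers in autotest_common.sh carry more bookkeeping than shown here):

  # Snapshot the trace shm file left behind by the app (shm id 0) for offline analysis.
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
  tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz $shm_files

  # Kill a test process only if it still exists and its comm name is an SPDK reactor, then reap it.
  pid=77084                                        # bdevperf pid in this run
  if kill -0 "$pid" 2>/dev/null; then
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" != sudo ] && kill "$pid" && wait "$pid"
  fi

  # Drop the kernel initiator modules once both apps are gone.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics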
common/autotest_common.sh@940 -- # kill -0 77052 00:11:53.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77052) - No such process 00:11:53.807 05:52:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77052 is not found' 00:11:53.807 05:52:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:53.807 05:52:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:53.807 05:52:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:53.807 05:52:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:53.807 05:52:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:53.807 05:52:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.808 05:52:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.808 05:52:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.808 05:52:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:53.808 05:52:15 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:53.808 ************************************ 00:11:53.808 END TEST nvmf_tls 00:11:53.808 ************************************ 00:11:53.808 00:11:53.808 real 1m9.358s 00:11:53.808 user 1m48.827s 00:11:53.808 sys 0m23.309s 00:11:53.808 05:52:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:53.808 05:52:15 -- common/autotest_common.sh@10 -- # set +x 00:11:53.808 05:52:15 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:53.808 05:52:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:53.808 05:52:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:53.808 05:52:15 -- common/autotest_common.sh@10 -- # set +x 00:11:53.808 ************************************ 00:11:53.808 START TEST nvmf_fips 00:11:53.808 ************************************ 00:11:53.808 05:52:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:53.808 * Looking for test storage... 
00:11:54.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:11:54.066 05:52:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:54.066 05:52:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:54.066 05:52:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:54.066 05:52:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:54.066 05:52:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:54.066 05:52:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:54.066 05:52:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:54.066 05:52:15 -- scripts/common.sh@335 -- # IFS=.-: 00:11:54.066 05:52:15 -- scripts/common.sh@335 -- # read -ra ver1 00:11:54.066 05:52:15 -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.066 05:52:15 -- scripts/common.sh@336 -- # read -ra ver2 00:11:54.066 05:52:15 -- scripts/common.sh@337 -- # local 'op=<' 00:11:54.066 05:52:15 -- scripts/common.sh@339 -- # ver1_l=2 00:11:54.066 05:52:15 -- scripts/common.sh@340 -- # ver2_l=1 00:11:54.066 05:52:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:54.066 05:52:15 -- scripts/common.sh@343 -- # case "$op" in 00:11:54.066 05:52:15 -- scripts/common.sh@344 -- # : 1 00:11:54.066 05:52:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:54.066 05:52:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:54.066 05:52:15 -- scripts/common.sh@364 -- # decimal 1 00:11:54.066 05:52:15 -- scripts/common.sh@352 -- # local d=1 00:11:54.066 05:52:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.066 05:52:15 -- scripts/common.sh@354 -- # echo 1 00:11:54.066 05:52:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:54.066 05:52:15 -- scripts/common.sh@365 -- # decimal 2 00:11:54.066 05:52:15 -- scripts/common.sh@352 -- # local d=2 00:11:54.066 05:52:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.066 05:52:15 -- scripts/common.sh@354 -- # echo 2 00:11:54.066 05:52:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:54.066 05:52:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:54.066 05:52:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:54.066 05:52:15 -- scripts/common.sh@367 -- # return 0 00:11:54.066 05:52:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.066 05:52:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.066 --rc genhtml_branch_coverage=1 00:11:54.066 --rc genhtml_function_coverage=1 00:11:54.066 --rc genhtml_legend=1 00:11:54.066 --rc geninfo_all_blocks=1 00:11:54.066 --rc geninfo_unexecuted_blocks=1 00:11:54.066 00:11:54.066 ' 00:11:54.066 05:52:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.066 --rc genhtml_branch_coverage=1 00:11:54.066 --rc genhtml_function_coverage=1 00:11:54.066 --rc genhtml_legend=1 00:11:54.066 --rc geninfo_all_blocks=1 00:11:54.066 --rc geninfo_unexecuted_blocks=1 00:11:54.066 00:11:54.066 ' 00:11:54.066 05:52:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.066 --rc genhtml_branch_coverage=1 00:11:54.066 --rc genhtml_function_coverage=1 00:11:54.066 --rc genhtml_legend=1 00:11:54.066 --rc geninfo_all_blocks=1 00:11:54.066 --rc geninfo_unexecuted_blocks=1 00:11:54.066 00:11:54.066 ' 00:11:54.066 
05:52:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.066 --rc genhtml_branch_coverage=1 00:11:54.066 --rc genhtml_function_coverage=1 00:11:54.066 --rc genhtml_legend=1 00:11:54.066 --rc geninfo_all_blocks=1 00:11:54.066 --rc geninfo_unexecuted_blocks=1 00:11:54.066 00:11:54.066 ' 00:11:54.066 05:52:15 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.066 05:52:15 -- nvmf/common.sh@7 -- # uname -s 00:11:54.066 05:52:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.066 05:52:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.066 05:52:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.066 05:52:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.066 05:52:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.066 05:52:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.066 05:52:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.066 05:52:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.066 05:52:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.066 05:52:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.066 05:52:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:11:54.066 05:52:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:11:54.066 05:52:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.066 05:52:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.066 05:52:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.066 05:52:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.066 05:52:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.066 05:52:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.066 05:52:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.066 05:52:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.066 05:52:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.066 05:52:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.066 05:52:15 -- paths/export.sh@5 -- # export PATH 00:11:54.066 05:52:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.066 05:52:15 -- nvmf/common.sh@46 -- # : 0 00:11:54.066 05:52:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:54.066 05:52:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:54.066 05:52:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:54.066 05:52:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.066 05:52:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.066 05:52:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:54.066 05:52:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:54.066 05:52:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:54.066 05:52:15 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.066 05:52:15 -- fips/fips.sh@89 -- # check_openssl_version 00:11:54.066 05:52:15 -- fips/fips.sh@83 -- # local target=3.0.0 00:11:54.066 05:52:15 -- fips/fips.sh@85 -- # openssl version 00:11:54.066 05:52:15 -- fips/fips.sh@85 -- # awk '{print $2}' 00:11:54.066 05:52:15 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:11:54.066 05:52:15 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:11:54.066 05:52:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:54.066 05:52:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:54.066 05:52:15 -- scripts/common.sh@335 -- # IFS=.-: 00:11:54.066 05:52:15 -- scripts/common.sh@335 -- # read -ra ver1 00:11:54.066 05:52:15 -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.066 05:52:15 -- scripts/common.sh@336 -- # read -ra ver2 00:11:54.066 05:52:15 -- scripts/common.sh@337 -- # local 'op=>=' 00:11:54.066 05:52:15 -- scripts/common.sh@339 -- # ver1_l=3 00:11:54.066 05:52:15 -- scripts/common.sh@340 -- # ver2_l=3 00:11:54.066 05:52:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:54.066 05:52:15 -- scripts/common.sh@343 -- # case "$op" in 00:11:54.066 05:52:15 -- scripts/common.sh@347 -- # : 1 00:11:54.066 05:52:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:54.066 05:52:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.066 05:52:15 -- scripts/common.sh@364 -- # decimal 3 00:11:54.066 05:52:15 -- scripts/common.sh@352 -- # local d=3 00:11:54.066 05:52:15 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:54.066 05:52:15 -- scripts/common.sh@354 -- # echo 3 00:11:54.066 05:52:15 -- scripts/common.sh@364 -- # ver1[v]=3 00:11:54.066 05:52:15 -- scripts/common.sh@365 -- # decimal 3 00:11:54.066 05:52:15 -- scripts/common.sh@352 -- # local d=3 00:11:54.066 05:52:15 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:54.066 05:52:15 -- scripts/common.sh@354 -- # echo 3 00:11:54.066 05:52:15 -- scripts/common.sh@365 -- # ver2[v]=3 00:11:54.066 05:52:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:54.066 05:52:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:54.066 05:52:15 -- scripts/common.sh@363 -- # (( v++ )) 00:11:54.066 05:52:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:54.066 05:52:15 -- scripts/common.sh@364 -- # decimal 1 00:11:54.066 05:52:15 -- scripts/common.sh@352 -- # local d=1 00:11:54.066 05:52:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.066 05:52:15 -- scripts/common.sh@354 -- # echo 1 00:11:54.066 05:52:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:54.066 05:52:15 -- scripts/common.sh@365 -- # decimal 0 00:11:54.066 05:52:15 -- scripts/common.sh@352 -- # local d=0 00:11:54.066 05:52:15 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:54.066 05:52:15 -- scripts/common.sh@354 -- # echo 0 00:11:54.066 05:52:15 -- scripts/common.sh@365 -- # ver2[v]=0 00:11:54.066 05:52:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:54.066 05:52:15 -- scripts/common.sh@366 -- # return 0 00:11:54.066 05:52:15 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:11:54.066 05:52:15 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:11:54.066 05:52:15 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:11:54.066 05:52:15 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:11:54.066 05:52:15 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:11:54.066 05:52:15 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:11:54.066 05:52:15 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:11:54.066 05:52:15 -- fips/fips.sh@113 -- # build_openssl_config 00:11:54.066 05:52:15 -- fips/fips.sh@37 -- # cat 00:11:54.066 05:52:15 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:11:54.066 05:52:15 -- fips/fips.sh@58 -- # cat - 00:11:54.066 05:52:15 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:11:54.066 05:52:15 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:11:54.066 05:52:15 -- fips/fips.sh@116 -- # mapfile -t providers 00:11:54.066 05:52:15 -- fips/fips.sh@116 -- # openssl list -providers 00:11:54.066 05:52:15 -- fips/fips.sh@116 -- # grep name 00:11:54.066 05:52:15 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:11:54.066 05:52:15 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:11:54.066 05:52:15 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:11:54.066 05:52:15 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:11:54.066 05:52:15 -- fips/fips.sh@127 -- # : 00:11:54.066 05:52:15 -- common/autotest_common.sh@650 -- # local es=0 00:11:54.066 05:52:15 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:11:54.066 05:52:15 -- common/autotest_common.sh@638 -- # local arg=openssl 00:11:54.067 05:52:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.067 05:52:15 -- common/autotest_common.sh@642 -- # type -t openssl 00:11:54.067 05:52:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.067 05:52:15 -- common/autotest_common.sh@644 -- # type -P openssl 00:11:54.067 05:52:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.067 05:52:15 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:11:54.067 05:52:15 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:11:54.067 05:52:15 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:11:54.325 Error setting digest 00:11:54.325 40C2ADCBFD7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:11:54.325 40C2ADCBFD7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:11:54.325 05:52:15 -- common/autotest_common.sh@653 -- # es=1 00:11:54.325 05:52:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:54.325 05:52:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:54.325 05:52:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:54.325 05:52:15 -- fips/fips.sh@130 -- # nvmftestinit 00:11:54.325 05:52:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:54.325 05:52:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.325 05:52:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:54.325 05:52:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:54.325 05:52:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:54.325 05:52:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.325 05:52:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.325 05:52:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.325 05:52:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:54.325 05:52:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:54.325 05:52:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:54.325 05:52:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:54.325 05:52:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:54.325 05:52:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:54.325 05:52:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.325 05:52:15 -- 
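The checks traced just above are the heart of the FIPS gate in fips.sh: OpenSSL must be 3.0.0 or newer, fips.so must exist in the modules directory, loading the generated spdk_fips.conf must expose both a base and a fips provider, and enforcement is proven by expecting a non-approved digest to fail (the "Error setting digest" for MD5 is the desired outcome). A condensed sketch of that gate, assuming the spdk_fips.conf produced by build_openssl_config sits in the working directory as in this run:

  openssl version | awk '{print $2}'            # 3.1.1 here; must compare >= 3.0.0
  openssl info -modulesdir                      # /usr/lib64/ossl-modules, which must contain fips.so
  export OPENSSL_CONF=spdk_fips.conf            # config generated by build_openssl_config
  openssl list -providers | grep name           # expect two entries: a base provider and a fips provider
  if echo -n test | openssl md5 2>/dev/null; then
      echo "MD5 unexpectedly succeeded - FIPS enforcement is not active" >&2
      exit 1
  fi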
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.325 05:52:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:54.325 05:52:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:54.325 05:52:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.325 05:52:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.325 05:52:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.325 05:52:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.325 05:52:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.325 05:52:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.325 05:52:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.325 05:52:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.325 05:52:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:54.325 05:52:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:54.325 Cannot find device "nvmf_tgt_br" 00:11:54.325 05:52:15 -- nvmf/common.sh@154 -- # true 00:11:54.325 05:52:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.325 Cannot find device "nvmf_tgt_br2" 00:11:54.325 05:52:15 -- nvmf/common.sh@155 -- # true 00:11:54.325 05:52:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:54.325 05:52:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:54.325 Cannot find device "nvmf_tgt_br" 00:11:54.325 05:52:15 -- nvmf/common.sh@157 -- # true 00:11:54.325 05:52:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:54.325 Cannot find device "nvmf_tgt_br2" 00:11:54.325 05:52:15 -- nvmf/common.sh@158 -- # true 00:11:54.325 05:52:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:54.325 05:52:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:54.325 05:52:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.325 05:52:15 -- nvmf/common.sh@161 -- # true 00:11:54.326 05:52:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.326 05:52:15 -- nvmf/common.sh@162 -- # true 00:11:54.326 05:52:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:54.326 05:52:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:54.326 05:52:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:54.326 05:52:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:54.326 05:52:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:54.326 05:52:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:54.326 05:52:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:54.326 05:52:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:54.326 05:52:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:54.326 05:52:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:54.326 05:52:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:54.326 05:52:15 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:54.326 05:52:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:54.584 05:52:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:54.584 05:52:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:54.584 05:52:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:54.584 05:52:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:54.584 05:52:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:54.584 05:52:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:54.584 05:52:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:54.584 05:52:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:54.584 05:52:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:54.584 05:52:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:54.584 05:52:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:54.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:11:54.584 00:11:54.584 --- 10.0.0.2 ping statistics --- 00:11:54.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.584 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:54.584 05:52:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:54.584 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:54.584 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:54.584 00:11:54.584 --- 10.0.0.3 ping statistics --- 00:11:54.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.584 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:54.584 05:52:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:54.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:54.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:54.584 00:11:54.584 --- 10.0.0.1 ping statistics --- 00:11:54.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.584 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:54.584 05:52:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.584 05:52:16 -- nvmf/common.sh@421 -- # return 0 00:11:54.584 05:52:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:54.584 05:52:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.584 05:52:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:54.584 05:52:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:54.584 05:52:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.584 05:52:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:54.584 05:52:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:54.584 05:52:16 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:11:54.584 05:52:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:54.584 05:52:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:54.584 05:52:16 -- common/autotest_common.sh@10 -- # set +x 00:11:54.584 05:52:16 -- nvmf/common.sh@469 -- # nvmfpid=77443 00:11:54.584 05:52:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:54.584 05:52:16 -- nvmf/common.sh@470 -- # waitforlisten 77443 00:11:54.584 05:52:16 -- common/autotest_common.sh@829 -- # '[' -z 77443 ']' 00:11:54.584 05:52:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.584 05:52:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.584 05:52:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.584 05:52:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.584 05:52:16 -- common/autotest_common.sh@10 -- # set +x 00:11:54.584 [2024-12-15 05:52:16.159177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:54.584 [2024-12-15 05:52:16.159274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.842 [2024-12-15 05:52:16.298147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.842 [2024-12-15 05:52:16.335288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:54.842 [2024-12-15 05:52:16.335457] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.842 [2024-12-15 05:52:16.335472] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.842 [2024-12-15 05:52:16.335482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
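nvmf_veth_init, traced above, builds the whole test network out of veth pairs, a network namespace for the target, and one bridge, then proves reachability with single pings before the target starts. A condensed sketch of that bring-up, using the interface names, addresses and iptables rules exactly as they appear in this log:

  # Target namespace plus three veth pairs (initiator, target, secondary target).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring the links up, bridge the host-side ends, and open TCP/4420 toward the initiator interface.
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity-check reachability in both directions before the target comes up.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1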
00:11:54.842 [2024-12-15 05:52:16.335513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.777 05:52:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.777 05:52:17 -- common/autotest_common.sh@862 -- # return 0 00:11:55.777 05:52:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:55.777 05:52:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:55.777 05:52:17 -- common/autotest_common.sh@10 -- # set +x 00:11:55.777 05:52:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.777 05:52:17 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:11:55.777 05:52:17 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:55.777 05:52:17 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:55.777 05:52:17 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:55.777 05:52:17 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:55.777 05:52:17 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:55.777 05:52:17 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:55.777 05:52:17 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:56.036 [2024-12-15 05:52:17.423462] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.036 [2024-12-15 05:52:17.439408] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:56.036 [2024-12-15 05:52:17.439612] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.036 malloc0 00:11:56.036 05:52:17 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:56.036 05:52:17 -- fips/fips.sh@147 -- # bdevperf_pid=77477 00:11:56.036 05:52:17 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:56.036 05:52:17 -- fips/fips.sh@148 -- # waitforlisten 77477 /var/tmp/bdevperf.sock 00:11:56.036 05:52:17 -- common/autotest_common.sh@829 -- # '[' -z 77477 ']' 00:11:56.036 05:52:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:56.036 05:52:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:56.036 05:52:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:56.036 05:52:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.036 05:52:17 -- common/autotest_common.sh@10 -- # set +x 00:11:56.036 [2024-12-15 05:52:17.565349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
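The key handling above is the entire host-side TLS provisioning step for the FIPS run: the PSK interchange string (NVMeTLSkey-1:01:...) is written verbatim to a file with owner-only permissions, and setup_nvmf_tgt_conf then configures the target against that same file over scripts/rpc.py (those target-side RPC calls are not shown in this excerpt). A minimal sketch of the key-file handling, using the path and key from this run:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"     # no trailing newline: the PSK interchange string is consumed verbatim
  chmod 0600 "$key_path"           # keep the secret private to the test user, as the harness does

The same file is later handed to the initiator through --psk on bdev_nvme_attach_controller, which is why the listener and the attach both log "TLS support is considered experimental".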
00:11:56.036 [2024-12-15 05:52:17.565447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77477 ] 00:11:56.295 [2024-12-15 05:52:17.702108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.295 [2024-12-15 05:52:17.741385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.229 05:52:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.229 05:52:18 -- common/autotest_common.sh@862 -- # return 0 00:11:57.229 05:52:18 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:57.229 [2024-12-15 05:52:18.726094] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:57.229 TLSTESTn1 00:11:57.229 05:52:18 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:57.487 Running I/O for 10 seconds... 00:12:07.464 00:12:07.465 Latency(us) 00:12:07.465 [2024-12-15T05:52:29.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.465 [2024-12-15T05:52:29.106Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:07.465 Verification LBA range: start 0x0 length 0x2000 00:12:07.465 TLSTESTn1 : 10.01 5993.32 23.41 0.00 0.00 21322.77 4349.21 27048.49 00:12:07.465 [2024-12-15T05:52:29.106Z] =================================================================================================================== 00:12:07.465 [2024-12-15T05:52:29.106Z] Total : 5993.32 23.41 0.00 0.00 21322.77 4349.21 27048.49 00:12:07.465 0 00:12:07.465 05:52:28 -- fips/fips.sh@1 -- # cleanup 00:12:07.465 05:52:28 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:07.465 05:52:28 -- common/autotest_common.sh@806 -- # type=--id 00:12:07.465 05:52:28 -- common/autotest_common.sh@807 -- # id=0 00:12:07.465 05:52:28 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:07.465 05:52:28 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:07.465 05:52:28 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:07.465 05:52:28 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:07.465 05:52:28 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:07.465 05:52:28 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:07.465 nvmf_trace.0 00:12:07.465 05:52:29 -- common/autotest_common.sh@821 -- # return 0 00:12:07.465 05:52:29 -- fips/fips.sh@16 -- # killprocess 77477 00:12:07.465 05:52:29 -- common/autotest_common.sh@936 -- # '[' -z 77477 ']' 00:12:07.465 05:52:29 -- common/autotest_common.sh@940 -- # kill -0 77477 00:12:07.465 05:52:29 -- common/autotest_common.sh@941 -- # uname 00:12:07.465 05:52:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.465 05:52:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77477 00:12:07.465 05:52:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:07.465 05:52:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:07.465 
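As a quick consistency check on the result table above: 5993.32 IOPS at a 4096-byte I/O size works out to 5993.32 * 4096 / 2^20, roughly 23.41 MiB/s, which matches the reported MiB/s column, with zero entries in the Fail/s and TO/s columns for the whole 10-second verify run.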
killing process with pid 77477 00:12:07.465 05:52:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77477' 00:12:07.465 Received shutdown signal, test time was about 10.000000 seconds 00:12:07.465 00:12:07.465 Latency(us) 00:12:07.465 [2024-12-15T05:52:29.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.465 [2024-12-15T05:52:29.106Z] =================================================================================================================== 00:12:07.465 [2024-12-15T05:52:29.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:07.465 05:52:29 -- common/autotest_common.sh@955 -- # kill 77477 00:12:07.465 05:52:29 -- common/autotest_common.sh@960 -- # wait 77477 00:12:07.724 05:52:29 -- fips/fips.sh@17 -- # nvmftestfini 00:12:07.724 05:52:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:07.724 05:52:29 -- nvmf/common.sh@116 -- # sync 00:12:07.724 05:52:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:07.724 05:52:29 -- nvmf/common.sh@119 -- # set +e 00:12:07.724 05:52:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:07.724 05:52:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:07.724 rmmod nvme_tcp 00:12:07.724 rmmod nvme_fabrics 00:12:07.724 rmmod nvme_keyring 00:12:07.724 05:52:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:07.724 05:52:29 -- nvmf/common.sh@123 -- # set -e 00:12:07.724 05:52:29 -- nvmf/common.sh@124 -- # return 0 00:12:07.724 05:52:29 -- nvmf/common.sh@477 -- # '[' -n 77443 ']' 00:12:07.724 05:52:29 -- nvmf/common.sh@478 -- # killprocess 77443 00:12:07.724 05:52:29 -- common/autotest_common.sh@936 -- # '[' -z 77443 ']' 00:12:07.724 05:52:29 -- common/autotest_common.sh@940 -- # kill -0 77443 00:12:07.724 05:52:29 -- common/autotest_common.sh@941 -- # uname 00:12:07.724 05:52:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.724 05:52:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77443 00:12:07.724 05:52:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:07.724 05:52:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:07.724 05:52:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77443' 00:12:07.724 killing process with pid 77443 00:12:07.724 05:52:29 -- common/autotest_common.sh@955 -- # kill 77443 00:12:07.724 05:52:29 -- common/autotest_common.sh@960 -- # wait 77443 00:12:07.983 05:52:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:07.983 05:52:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:07.983 05:52:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:07.983 05:52:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.983 05:52:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:07.983 05:52:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.983 05:52:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.983 05:52:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.983 05:52:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:07.983 05:52:29 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:07.983 00:12:07.983 real 0m14.118s 00:12:07.983 user 0m19.117s 00:12:07.983 sys 0m5.719s 00:12:07.984 05:52:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:07.984 05:52:29 -- common/autotest_common.sh@10 -- # set +x 00:12:07.984 ************************************ 00:12:07.984 END TEST nvmf_fips 
00:12:07.984 ************************************ 00:12:07.984 05:52:29 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:07.984 05:52:29 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:07.984 05:52:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:07.984 05:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.984 05:52:29 -- common/autotest_common.sh@10 -- # set +x 00:12:07.984 ************************************ 00:12:07.984 START TEST nvmf_fuzz 00:12:07.984 ************************************ 00:12:07.984 05:52:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:07.984 * Looking for test storage... 00:12:07.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.984 05:52:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:08.243 05:52:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:08.243 05:52:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:08.243 05:52:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:08.243 05:52:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:08.243 05:52:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:08.243 05:52:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:08.243 05:52:29 -- scripts/common.sh@335 -- # IFS=.-: 00:12:08.243 05:52:29 -- scripts/common.sh@335 -- # read -ra ver1 00:12:08.243 05:52:29 -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.243 05:52:29 -- scripts/common.sh@336 -- # read -ra ver2 00:12:08.243 05:52:29 -- scripts/common.sh@337 -- # local 'op=<' 00:12:08.243 05:52:29 -- scripts/common.sh@339 -- # ver1_l=2 00:12:08.243 05:52:29 -- scripts/common.sh@340 -- # ver2_l=1 00:12:08.243 05:52:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:08.243 05:52:29 -- scripts/common.sh@343 -- # case "$op" in 00:12:08.243 05:52:29 -- scripts/common.sh@344 -- # : 1 00:12:08.243 05:52:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:08.243 05:52:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.243 05:52:29 -- scripts/common.sh@364 -- # decimal 1 00:12:08.243 05:52:29 -- scripts/common.sh@352 -- # local d=1 00:12:08.243 05:52:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.243 05:52:29 -- scripts/common.sh@354 -- # echo 1 00:12:08.243 05:52:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:08.243 05:52:29 -- scripts/common.sh@365 -- # decimal 2 00:12:08.243 05:52:29 -- scripts/common.sh@352 -- # local d=2 00:12:08.243 05:52:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.243 05:52:29 -- scripts/common.sh@354 -- # echo 2 00:12:08.243 05:52:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:08.243 05:52:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:08.243 05:52:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:08.243 05:52:29 -- scripts/common.sh@367 -- # return 0 00:12:08.243 05:52:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.243 05:52:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:08.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.243 --rc genhtml_branch_coverage=1 00:12:08.243 --rc genhtml_function_coverage=1 00:12:08.243 --rc genhtml_legend=1 00:12:08.243 --rc geninfo_all_blocks=1 00:12:08.243 --rc geninfo_unexecuted_blocks=1 00:12:08.243 00:12:08.243 ' 00:12:08.243 05:52:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:08.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.243 --rc genhtml_branch_coverage=1 00:12:08.243 --rc genhtml_function_coverage=1 00:12:08.243 --rc genhtml_legend=1 00:12:08.243 --rc geninfo_all_blocks=1 00:12:08.243 --rc geninfo_unexecuted_blocks=1 00:12:08.243 00:12:08.243 ' 00:12:08.243 05:52:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:08.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.243 --rc genhtml_branch_coverage=1 00:12:08.243 --rc genhtml_function_coverage=1 00:12:08.243 --rc genhtml_legend=1 00:12:08.243 --rc geninfo_all_blocks=1 00:12:08.243 --rc geninfo_unexecuted_blocks=1 00:12:08.243 00:12:08.243 ' 00:12:08.243 05:52:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:08.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.243 --rc genhtml_branch_coverage=1 00:12:08.243 --rc genhtml_function_coverage=1 00:12:08.243 --rc genhtml_legend=1 00:12:08.243 --rc geninfo_all_blocks=1 00:12:08.243 --rc geninfo_unexecuted_blocks=1 00:12:08.243 00:12:08.243 ' 00:12:08.243 05:52:29 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.243 05:52:29 -- nvmf/common.sh@7 -- # uname -s 00:12:08.243 05:52:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.244 05:52:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.244 05:52:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.244 05:52:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.244 05:52:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.244 05:52:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.244 05:52:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.244 05:52:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.244 05:52:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.244 05:52:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.244 05:52:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 
00:12:08.244 05:52:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:12:08.244 05:52:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.244 05:52:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.244 05:52:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:08.244 05:52:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.244 05:52:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.244 05:52:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.244 05:52:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.244 05:52:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.244 05:52:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.244 05:52:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.244 05:52:29 -- paths/export.sh@5 -- # export PATH 00:12:08.244 05:52:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.244 05:52:29 -- nvmf/common.sh@46 -- # : 0 00:12:08.244 05:52:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:08.244 05:52:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:08.244 05:52:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:08.244 05:52:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.244 05:52:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.244 05:52:29 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:08.244 05:52:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:08.244 05:52:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:08.244 05:52:29 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:08.244 05:52:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:08.244 05:52:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.244 05:52:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:08.244 05:52:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:08.244 05:52:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:08.244 05:52:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.244 05:52:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.244 05:52:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.244 05:52:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:08.244 05:52:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:08.244 05:52:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:08.244 05:52:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:08.244 05:52:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:08.244 05:52:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:08.244 05:52:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.244 05:52:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.244 05:52:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:08.244 05:52:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:08.244 05:52:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:08.244 05:52:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:08.244 05:52:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:08.244 05:52:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.244 05:52:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:08.244 05:52:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:08.244 05:52:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:08.244 05:52:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:08.244 05:52:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:08.244 05:52:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:08.244 Cannot find device "nvmf_tgt_br" 00:12:08.244 05:52:29 -- nvmf/common.sh@154 -- # true 00:12:08.244 05:52:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.244 Cannot find device "nvmf_tgt_br2" 00:12:08.244 05:52:29 -- nvmf/common.sh@155 -- # true 00:12:08.244 05:52:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:08.244 05:52:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:08.244 Cannot find device "nvmf_tgt_br" 00:12:08.244 05:52:29 -- nvmf/common.sh@157 -- # true 00:12:08.244 05:52:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:08.244 Cannot find device "nvmf_tgt_br2" 00:12:08.244 05:52:29 -- nvmf/common.sh@158 -- # true 00:12:08.244 05:52:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:08.244 05:52:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:08.503 05:52:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.503 05:52:29 -- nvmf/common.sh@161 -- # true 00:12:08.503 05:52:29 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.503 05:52:29 -- nvmf/common.sh@162 -- # true 00:12:08.503 05:52:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:08.503 05:52:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:08.503 05:52:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:08.503 05:52:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:08.503 05:52:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:08.503 05:52:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.503 05:52:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.503 05:52:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:08.503 05:52:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:08.503 05:52:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:08.503 05:52:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:08.503 05:52:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:08.503 05:52:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:08.503 05:52:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.503 05:52:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.503 05:52:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:08.503 05:52:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:08.503 05:52:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:08.503 05:52:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.503 05:52:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.503 05:52:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.503 05:52:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:08.503 05:52:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.503 05:52:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:08.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:08.503 00:12:08.503 --- 10.0.0.2 ping statistics --- 00:12:08.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.503 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:08.503 05:52:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:08.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:08.503 00:12:08.503 --- 10.0.0.3 ping statistics --- 00:12:08.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.503 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:08.503 05:52:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:08.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:08.503 00:12:08.503 --- 10.0.0.1 ping statistics --- 00:12:08.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.503 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:08.503 05:52:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.503 05:52:30 -- nvmf/common.sh@421 -- # return 0 00:12:08.503 05:52:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:08.503 05:52:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.503 05:52:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:08.503 05:52:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:08.503 05:52:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.503 05:52:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:08.503 05:52:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:08.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.503 05:52:30 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77805 00:12:08.503 05:52:30 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:08.503 05:52:30 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:08.503 05:52:30 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77805 00:12:08.503 05:52:30 -- common/autotest_common.sh@829 -- # '[' -z 77805 ']' 00:12:08.503 05:52:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.503 05:52:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.503 05:52:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
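For reference, the nvmf_veth_init sequence traced above reduces to the following illustrative replay. Every interface name, address, and port is taken from the log itself; the preliminary cleanup, error handling, and xtrace plumbing are omitted, so this is a sketch of the topology rather than the harness code.

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# Three veth pairs: one initiator-side link, two target-side links.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# The target ends move into the namespace; the initiator end stays on the host.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
# A bridge joins the host-side peers so 10.0.0.1 can reach both target addresses.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The three pings in the log verify host->target, host->target2, and target->host reachability.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec "$NS" ping -c 1 10.0.0.1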
00:12:08.503 05:52:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.503 05:52:30 -- common/autotest_common.sh@10 -- # set +x 00:12:09.881 05:52:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.881 05:52:31 -- common/autotest_common.sh@862 -- # return 0 00:12:09.881 05:52:31 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.881 05:52:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.881 05:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:09.881 05:52:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.881 05:52:31 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:09.881 05:52:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.881 05:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:09.881 Malloc0 00:12:09.881 05:52:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.881 05:52:31 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.881 05:52:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.881 05:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:09.881 05:52:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.881 05:52:31 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.881 05:52:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.881 05:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:09.881 05:52:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.881 05:52:31 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.881 05:52:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.882 05:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:09.882 05:52:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.882 05:52:31 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:09.882 05:52:31 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:09.882 Shutting down the fuzz application 00:12:09.882 05:52:31 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:10.141 Shutting down the fuzz application 00:12:10.141 05:52:31 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.141 05:52:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.141 05:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:10.400 05:52:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.400 05:52:31 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:10.400 05:52:31 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:10.400 05:52:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:10.400 05:52:31 -- nvmf/common.sh@116 -- # sync 00:12:10.400 05:52:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:10.400 05:52:31 -- nvmf/common.sh@119 -- # set +e 00:12:10.400 05:52:31 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:12:10.400 05:52:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:10.400 rmmod nvme_tcp 00:12:10.400 rmmod nvme_fabrics 00:12:10.400 rmmod nvme_keyring 00:12:10.400 05:52:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:10.400 05:52:31 -- nvmf/common.sh@123 -- # set -e 00:12:10.400 05:52:31 -- nvmf/common.sh@124 -- # return 0 00:12:10.400 05:52:31 -- nvmf/common.sh@477 -- # '[' -n 77805 ']' 00:12:10.400 05:52:31 -- nvmf/common.sh@478 -- # killprocess 77805 00:12:10.400 05:52:31 -- common/autotest_common.sh@936 -- # '[' -z 77805 ']' 00:12:10.400 05:52:31 -- common/autotest_common.sh@940 -- # kill -0 77805 00:12:10.400 05:52:31 -- common/autotest_common.sh@941 -- # uname 00:12:10.400 05:52:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.400 05:52:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77805 00:12:10.400 killing process with pid 77805 00:12:10.400 05:52:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:10.400 05:52:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:10.400 05:52:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77805' 00:12:10.400 05:52:31 -- common/autotest_common.sh@955 -- # kill 77805 00:12:10.400 05:52:31 -- common/autotest_common.sh@960 -- # wait 77805 00:12:10.659 05:52:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:10.659 05:52:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:10.659 05:52:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:10.659 05:52:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.659 05:52:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:10.659 05:52:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.659 05:52:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.659 05:52:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.659 05:52:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:10.659 05:52:32 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:10.659 ************************************ 00:12:10.659 END TEST nvmf_fuzz 00:12:10.659 ************************************ 00:12:10.659 00:12:10.659 real 0m2.590s 00:12:10.659 user 0m2.669s 00:12:10.659 sys 0m0.596s 00:12:10.659 05:52:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:10.659 05:52:32 -- common/autotest_common.sh@10 -- # set +x 00:12:10.659 05:52:32 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:10.659 05:52:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:10.659 05:52:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.659 05:52:32 -- common/autotest_common.sh@10 -- # set +x 00:12:10.659 ************************************ 00:12:10.659 START TEST nvmf_multiconnection 00:12:10.659 ************************************ 00:12:10.659 05:52:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:10.659 * Looking for test storage... 
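The nvmf_multiconnection test starting here repeats, for eleven subsystems, essentially the same target-side bring-up that the fuzz stage above performed once for cnode1. Condensed from the trace, the per-subsystem sequence looks like the sketch below; rpc_cmd is the harness helper visible in the log (presumably forwarding to SPDK's JSON-RPC interface), and only iteration i=1 of the i=1..11 loop is shown.

# Target runs inside the test namespace with core mask 0xF (the log shows reactors on cores 0-3).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
i=1
rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the script
rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # -s sets the serial waitforserial greps for
rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
# Host side: connect over the veth link, then poll (sleep 2, up to 15 tries in the
# log) until lsblk reports a block device whose serial matches SPDK$i.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
  -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c "SPDK$i"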
00:12:10.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:10.659 05:52:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:10.659 05:52:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:10.659 05:52:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:10.919 05:52:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:10.919 05:52:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:10.919 05:52:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:10.919 05:52:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:10.919 05:52:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:10.919 05:52:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:10.919 05:52:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.919 05:52:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:10.919 05:52:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:10.919 05:52:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:10.919 05:52:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:10.919 05:52:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:10.919 05:52:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:10.919 05:52:32 -- scripts/common.sh@344 -- # : 1 00:12:10.919 05:52:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:10.919 05:52:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.919 05:52:32 -- scripts/common.sh@364 -- # decimal 1 00:12:10.919 05:52:32 -- scripts/common.sh@352 -- # local d=1 00:12:10.919 05:52:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.919 05:52:32 -- scripts/common.sh@354 -- # echo 1 00:12:10.919 05:52:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:10.919 05:52:32 -- scripts/common.sh@365 -- # decimal 2 00:12:10.919 05:52:32 -- scripts/common.sh@352 -- # local d=2 00:12:10.919 05:52:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.919 05:52:32 -- scripts/common.sh@354 -- # echo 2 00:12:10.919 05:52:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:10.919 05:52:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:10.919 05:52:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:10.919 05:52:32 -- scripts/common.sh@367 -- # return 0 00:12:10.919 05:52:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.919 05:52:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:10.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.919 --rc genhtml_branch_coverage=1 00:12:10.919 --rc genhtml_function_coverage=1 00:12:10.919 --rc genhtml_legend=1 00:12:10.919 --rc geninfo_all_blocks=1 00:12:10.919 --rc geninfo_unexecuted_blocks=1 00:12:10.919 00:12:10.919 ' 00:12:10.919 05:52:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:10.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.919 --rc genhtml_branch_coverage=1 00:12:10.919 --rc genhtml_function_coverage=1 00:12:10.919 --rc genhtml_legend=1 00:12:10.919 --rc geninfo_all_blocks=1 00:12:10.919 --rc geninfo_unexecuted_blocks=1 00:12:10.919 00:12:10.919 ' 00:12:10.919 05:52:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:10.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.919 --rc genhtml_branch_coverage=1 00:12:10.919 --rc genhtml_function_coverage=1 00:12:10.919 --rc genhtml_legend=1 00:12:10.919 --rc geninfo_all_blocks=1 00:12:10.919 --rc geninfo_unexecuted_blocks=1 00:12:10.919 00:12:10.919 ' 00:12:10.919 
05:52:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:10.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.919 --rc genhtml_branch_coverage=1 00:12:10.919 --rc genhtml_function_coverage=1 00:12:10.919 --rc genhtml_legend=1 00:12:10.919 --rc geninfo_all_blocks=1 00:12:10.919 --rc geninfo_unexecuted_blocks=1 00:12:10.919 00:12:10.919 ' 00:12:10.919 05:52:32 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.919 05:52:32 -- nvmf/common.sh@7 -- # uname -s 00:12:10.919 05:52:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.919 05:52:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.919 05:52:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.919 05:52:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.919 05:52:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.919 05:52:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.919 05:52:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.919 05:52:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.919 05:52:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.919 05:52:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.919 05:52:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:12:10.920 05:52:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:12:10.920 05:52:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.920 05:52:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.920 05:52:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.920 05:52:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.920 05:52:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.920 05:52:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.920 05:52:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.920 05:52:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.920 05:52:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.920 05:52:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.920 05:52:32 -- paths/export.sh@5 -- # export PATH 00:12:10.920 05:52:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.920 05:52:32 -- nvmf/common.sh@46 -- # : 0 00:12:10.920 05:52:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:10.920 05:52:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:10.920 05:52:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:10.920 05:52:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.920 05:52:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.920 05:52:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:10.920 05:52:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:10.920 05:52:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:10.920 05:52:32 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.920 05:52:32 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.920 05:52:32 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:10.920 05:52:32 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:10.920 05:52:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:10.920 05:52:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.920 05:52:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:10.920 05:52:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:10.920 05:52:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:10.920 05:52:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.920 05:52:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.920 05:52:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.920 05:52:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:10.920 05:52:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:10.920 05:52:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:10.920 05:52:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:10.920 05:52:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:10.920 05:52:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:10.920 05:52:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.920 05:52:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.920 05:52:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:10.920 05:52:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:10.920 05:52:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:10.920 05:52:32 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:10.920 05:52:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:10.920 05:52:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.920 05:52:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:10.920 05:52:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:10.920 05:52:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:10.920 05:52:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:10.920 05:52:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:10.920 05:52:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:10.920 Cannot find device "nvmf_tgt_br" 00:12:10.920 05:52:32 -- nvmf/common.sh@154 -- # true 00:12:10.920 05:52:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.920 Cannot find device "nvmf_tgt_br2" 00:12:10.920 05:52:32 -- nvmf/common.sh@155 -- # true 00:12:10.920 05:52:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:10.920 05:52:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:10.920 Cannot find device "nvmf_tgt_br" 00:12:10.920 05:52:32 -- nvmf/common.sh@157 -- # true 00:12:10.920 05:52:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:10.920 Cannot find device "nvmf_tgt_br2" 00:12:10.920 05:52:32 -- nvmf/common.sh@158 -- # true 00:12:10.920 05:52:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:10.920 05:52:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:10.920 05:52:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.920 05:52:32 -- nvmf/common.sh@161 -- # true 00:12:10.920 05:52:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.920 05:52:32 -- nvmf/common.sh@162 -- # true 00:12:10.920 05:52:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:10.920 05:52:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:10.920 05:52:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:11.180 05:52:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:11.180 05:52:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:11.180 05:52:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:11.180 05:52:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:11.180 05:52:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:11.180 05:52:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:11.180 05:52:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:11.180 05:52:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:11.180 05:52:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:11.180 05:52:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:11.180 05:52:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:11.180 05:52:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:12:11.180 05:52:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:11.180 05:52:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:11.180 05:52:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:11.180 05:52:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:11.180 05:52:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:11.180 05:52:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:11.180 05:52:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:11.180 05:52:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:11.180 05:52:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:11.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:12:11.180 00:12:11.180 --- 10.0.0.2 ping statistics --- 00:12:11.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.180 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:11.180 05:52:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:11.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:11.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:11.180 00:12:11.180 --- 10.0.0.3 ping statistics --- 00:12:11.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.180 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:11.180 05:52:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:11.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:12:11.180 00:12:11.180 --- 10.0.0.1 ping statistics --- 00:12:11.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.180 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:11.180 05:52:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.180 05:52:32 -- nvmf/common.sh@421 -- # return 0 00:12:11.180 05:52:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:11.180 05:52:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.180 05:52:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:11.180 05:52:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:11.180 05:52:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.180 05:52:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:11.180 05:52:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:11.180 05:52:32 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:11.180 05:52:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:11.180 05:52:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:11.180 05:52:32 -- common/autotest_common.sh@10 -- # set +x 00:12:11.180 05:52:32 -- nvmf/common.sh@469 -- # nvmfpid=78003 00:12:11.180 05:52:32 -- nvmf/common.sh@470 -- # waitforlisten 78003 00:12:11.180 05:52:32 -- common/autotest_common.sh@829 -- # '[' -z 78003 ']' 00:12:11.180 05:52:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.180 05:52:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.180 05:52:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.180 05:52:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.180 05:52:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.180 05:52:32 -- common/autotest_common.sh@10 -- # set +x 00:12:11.439 [2024-12-15 05:52:32.840278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:11.439 [2024-12-15 05:52:32.840383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.439 [2024-12-15 05:52:32.976633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.439 [2024-12-15 05:52:33.013122] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:11.439 [2024-12-15 05:52:33.013266] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.439 [2024-12-15 05:52:33.013277] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.439 [2024-12-15 05:52:33.013285] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.439 [2024-12-15 05:52:33.014210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.440 [2024-12-15 05:52:33.014301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.440 [2024-12-15 05:52:33.014451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.440 [2024-12-15 05:52:33.014455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.699 05:52:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.699 05:52:33 -- common/autotest_common.sh@862 -- # return 0 00:12:11.699 05:52:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:11.699 05:52:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.699 05:52:33 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 [2024-12-15 05:52:33.138962] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:11.699 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.699 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 Malloc1 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 [2024-12-15 05:52:33.205170] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.699 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 Malloc2 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.699 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 Malloc3 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:11.699 
05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.699 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:11.699 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.699 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.699 Malloc4 00:12:11.699 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.699 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:11.700 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.700 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.700 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.700 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:11.700 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.700 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.700 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.700 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:11.700 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.700 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.700 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.700 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.700 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:11.700 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.700 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.959 Malloc5 00:12:11.959 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.959 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:11.959 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.959 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.959 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.959 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:11.959 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.959 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.959 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.959 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:11.959 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.959 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.959 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.959 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.959 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:11.959 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.959 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.959 Malloc6 00:12:11.959 05:52:33 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.959 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:11.959 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.959 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.959 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.959 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:11.959 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.960 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 Malloc7 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.960 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 Malloc8 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 
-- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.960 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 Malloc9 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.960 05:52:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 Malloc10 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.960 05:52:33 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 Malloc11 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.960 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.960 05:52:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:11.960 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.960 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:12.219 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.219 05:52:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:12.219 05:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.219 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:12:12.219 05:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.219 05:52:33 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:12.219 05:52:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:12.219 05:52:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.219 05:52:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:12.219 05:52:33 -- common/autotest_common.sh@1187 -- # local i=0 00:12:12.219 05:52:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.219 05:52:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:12.219 05:52:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:14.123 05:52:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:14.382 05:52:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:14.382 05:52:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:12:14.382 05:52:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:14.382 05:52:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.382 05:52:35 -- common/autotest_common.sh@1197 -- # return 0 00:12:14.382 05:52:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:14.382 05:52:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:14.382 05:52:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:14.382 05:52:35 -- common/autotest_common.sh@1187 -- # local i=0 00:12:14.382 05:52:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.382 05:52:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:14.382 05:52:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:16.286 05:52:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:16.286 05:52:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:12:16.286 05:52:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:12:16.546 05:52:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:16.546 05:52:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.546 05:52:37 -- common/autotest_common.sh@1197 -- # return 0 00:12:16.546 05:52:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:16.546 05:52:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:16.546 05:52:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:16.546 05:52:38 -- common/autotest_common.sh@1187 -- # local i=0 00:12:16.546 05:52:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.546 05:52:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:16.546 05:52:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:18.450 05:52:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:18.450 05:52:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:18.450 05:52:40 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:12:18.709 05:52:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:18.709 05:52:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.709 05:52:40 -- common/autotest_common.sh@1197 -- # return 0 00:12:18.709 05:52:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:18.709 05:52:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:18.709 05:52:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:18.709 05:52:40 -- common/autotest_common.sh@1187 -- # local i=0 00:12:18.709 05:52:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.709 05:52:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:18.709 05:52:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:20.613 05:52:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:20.613 05:52:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:20.613 05:52:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:12:20.872 05:52:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:20.872 05:52:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.872 05:52:42 -- common/autotest_common.sh@1197 -- # return 0 00:12:20.872 05:52:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:20.872 05:52:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:20.872 05:52:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:20.872 05:52:42 -- common/autotest_common.sh@1187 -- # local i=0 00:12:20.872 05:52:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.872 05:52:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:20.872 05:52:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:22.777 05:52:44 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:22.777 05:52:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:22.777 05:52:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:12:23.036 05:52:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:23.036 05:52:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.036 05:52:44 -- common/autotest_common.sh@1197 -- # return 0 00:12:23.036 05:52:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:23.036 05:52:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:23.036 05:52:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:23.036 05:52:44 -- common/autotest_common.sh@1187 -- # local i=0 00:12:23.036 05:52:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.036 05:52:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:23.036 05:52:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:24.941 05:52:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:24.941 05:52:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:24.941 05:52:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:25.201 05:52:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:25.201 05:52:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.201 05:52:46 -- common/autotest_common.sh@1197 -- # return 0 00:12:25.201 05:52:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:25.201 05:52:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:25.201 05:52:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:25.201 05:52:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:25.201 05:52:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.201 05:52:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:25.201 05:52:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:27.107 05:52:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:27.107 05:52:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:27.107 05:52:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:12:27.366 05:52:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:27.366 05:52:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.366 05:52:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:27.366 05:52:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:27.366 05:52:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:27.366 05:52:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:27.366 05:52:48 -- common/autotest_common.sh@1187 -- # local i=0 00:12:27.366 05:52:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.366 05:52:48 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:27.366 05:52:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:29.270 05:52:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:29.530 05:52:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:29.530 05:52:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:12:29.530 05:52:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:29.530 05:52:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.530 05:52:50 -- common/autotest_common.sh@1197 -- # return 0 00:12:29.530 05:52:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:29.530 05:52:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:29.530 05:52:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:29.530 05:52:51 -- common/autotest_common.sh@1187 -- # local i=0 00:12:29.530 05:52:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.530 05:52:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:29.530 05:52:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:32.063 05:52:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:32.063 05:52:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:32.063 05:52:53 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:12:32.063 05:52:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:32.063 05:52:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.064 05:52:53 -- common/autotest_common.sh@1197 -- # return 0 00:12:32.064 05:52:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:32.064 05:52:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:32.064 05:52:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:32.064 05:52:53 -- common/autotest_common.sh@1187 -- # local i=0 00:12:32.064 05:52:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.064 05:52:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:32.064 05:52:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:33.972 05:52:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:33.972 05:52:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:33.972 05:52:55 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:12:33.972 05:52:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:33.972 05:52:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.972 05:52:55 -- common/autotest_common.sh@1197 -- # return 0 00:12:33.972 05:52:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.972 05:52:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:33.972 05:52:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:33.972 05:52:55 -- common/autotest_common.sh@1187 -- # local i=0 
00:12:33.972 05:52:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.972 05:52:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:33.972 05:52:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:35.875 05:52:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:35.875 05:52:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:35.875 05:52:57 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:12:35.875 05:52:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:35.875 05:52:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.875 05:52:57 -- common/autotest_common.sh@1197 -- # return 0 00:12:35.875 05:52:57 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:35.875 [global] 00:12:35.875 thread=1 00:12:35.875 invalidate=1 00:12:35.875 rw=read 00:12:35.875 time_based=1 00:12:35.875 runtime=10 00:12:35.875 ioengine=libaio 00:12:35.875 direct=1 00:12:35.875 bs=262144 00:12:35.875 iodepth=64 00:12:35.875 norandommap=1 00:12:35.875 numjobs=1 00:12:35.875 00:12:35.875 [job0] 00:12:35.875 filename=/dev/nvme0n1 00:12:35.875 [job1] 00:12:35.875 filename=/dev/nvme10n1 00:12:35.875 [job2] 00:12:35.875 filename=/dev/nvme1n1 00:12:35.875 [job3] 00:12:35.875 filename=/dev/nvme2n1 00:12:35.875 [job4] 00:12:35.875 filename=/dev/nvme3n1 00:12:35.875 [job5] 00:12:35.875 filename=/dev/nvme4n1 00:12:35.875 [job6] 00:12:35.875 filename=/dev/nvme5n1 00:12:35.875 [job7] 00:12:35.875 filename=/dev/nvme6n1 00:12:35.875 [job8] 00:12:35.875 filename=/dev/nvme7n1 00:12:35.875 [job9] 00:12:35.875 filename=/dev/nvme8n1 00:12:35.875 [job10] 00:12:35.875 filename=/dev/nvme9n1 00:12:36.134 Could not set queue depth (nvme0n1) 00:12:36.134 Could not set queue depth (nvme10n1) 00:12:36.134 Could not set queue depth (nvme1n1) 00:12:36.134 Could not set queue depth (nvme2n1) 00:12:36.134 Could not set queue depth (nvme3n1) 00:12:36.134 Could not set queue depth (nvme4n1) 00:12:36.134 Could not set queue depth (nvme5n1) 00:12:36.134 Could not set queue depth (nvme6n1) 00:12:36.134 Could not set queue depth (nvme7n1) 00:12:36.134 Could not set queue depth (nvme8n1) 00:12:36.134 Could not set queue depth (nvme9n1) 00:12:36.134 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:12:36.134 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:36.134 fio-3.35 00:12:36.134 Starting 11 threads 00:12:48.345 00:12:48.345 job0: (groupid=0, jobs=1): err= 0: pid=78458: Sun Dec 15 05:53:08 2024 00:12:48.345 read: IOPS=538, BW=135MiB/s (141MB/s)(1359MiB/10093msec) 00:12:48.345 slat (usec): min=21, max=53372, avg=1836.25, stdev=4162.95 00:12:48.345 clat (msec): min=10, max=206, avg=116.90, stdev=10.82 00:12:48.345 lat (msec): min=11, max=206, avg=118.73, stdev=11.18 00:12:48.345 clat percentiles (msec): 00:12:48.345 | 1.00th=[ 61], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 113], 00:12:48.345 | 30.00th=[ 115], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 118], 00:12:48.345 | 70.00th=[ 121], 80.00th=[ 122], 90.00th=[ 126], 95.00th=[ 128], 00:12:48.345 | 99.00th=[ 140], 99.50th=[ 155], 99.90th=[ 205], 99.95th=[ 207], 00:12:48.345 | 99.99th=[ 207] 00:12:48.345 bw ( KiB/s): min=131584, max=143360, per=6.58%, avg=137443.00, stdev=3489.58, samples=20 00:12:48.345 iops : min= 514, max= 560, avg=536.70, stdev=13.71, samples=20 00:12:48.345 lat (msec) : 20=0.15%, 100=1.95%, 250=97.90% 00:12:48.345 cpu : usr=0.26%, sys=1.93%, ctx=1362, majf=0, minf=4097 00:12:48.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:48.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.345 issued rwts: total=5434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.345 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.345 job1: (groupid=0, jobs=1): err= 0: pid=78459: Sun Dec 15 05:53:08 2024 00:12:48.345 read: IOPS=633, BW=158MiB/s (166MB/s)(1597MiB/10074msec) 00:12:48.345 slat (usec): min=21, max=37056, avg=1548.88, stdev=3464.26 00:12:48.345 clat (msec): min=32, max=152, avg=99.28, stdev=12.92 00:12:48.345 lat (msec): min=32, max=152, avg=100.83, stdev=13.16 00:12:48.345 clat percentiles (msec): 00:12:48.345 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:12:48.345 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 95], 60.00th=[ 99], 00:12:48.345 | 70.00th=[ 107], 80.00th=[ 114], 90.00th=[ 118], 95.00th=[ 122], 00:12:48.345 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 150], 99.95th=[ 150], 00:12:48.345 | 99.99th=[ 153] 00:12:48.345 bw ( KiB/s): min=132360, max=182637, per=7.74%, avg=161856.20, stdev=19881.88, samples=20 00:12:48.345 iops : min= 517, max= 713, avg=632.15, stdev=77.67, samples=20 00:12:48.345 lat (msec) : 50=0.11%, 100=63.22%, 250=36.67% 00:12:48.345 cpu : usr=0.32%, sys=2.15%, ctx=1557, majf=0, minf=4097 00:12:48.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:48.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.345 issued rwts: total=6386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.345 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.345 job2: (groupid=0, jobs=1): err= 0: pid=78460: Sun Dec 15 05:53:08 2024 00:12:48.345 read: IOPS=845, BW=211MiB/s (222MB/s)(2118MiB/10022msec) 00:12:48.345 slat (usec): min=18, max=46642, avg=1163.94, stdev=2795.57 00:12:48.345 clat (msec): min=4, max=155, avg=74.43, stdev=23.05 00:12:48.345 lat (msec): min=4, max=155, avg=75.59, stdev=23.44 00:12:48.345 clat percentiles (msec): 00:12:48.345 | 1.00th=[ 39], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 
60], 00:12:48.345 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 68], 00:12:48.345 | 70.00th=[ 71], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 121], 00:12:48.345 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 138], 99.95th=[ 138], 00:12:48.345 | 99.99th=[ 157] 00:12:48.345 bw ( KiB/s): min=132343, max=263680, per=10.30%, avg=215167.55, stdev=54589.17, samples=20 00:12:48.345 iops : min= 516, max= 1030, avg=840.40, stdev=213.38, samples=20 00:12:48.345 lat (msec) : 10=0.14%, 20=0.25%, 50=1.87%, 100=76.44%, 250=21.31% 00:12:48.345 cpu : usr=0.49%, sys=3.16%, ctx=1981, majf=0, minf=4097 00:12:48.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:48.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.345 issued rwts: total=8471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.345 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.345 job3: (groupid=0, jobs=1): err= 0: pid=78461: Sun Dec 15 05:53:08 2024 00:12:48.345 read: IOPS=640, BW=160MiB/s (168MB/s)(1613MiB/10074msec) 00:12:48.345 slat (usec): min=19, max=63696, avg=1545.30, stdev=3546.82 00:12:48.345 clat (msec): min=67, max=161, avg=98.27, stdev=13.15 00:12:48.345 lat (msec): min=68, max=172, avg=99.82, stdev=13.48 00:12:48.345 clat percentiles (msec): 00:12:48.345 | 1.00th=[ 77], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:12:48.345 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:12:48.345 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 118], 95.00th=[ 122], 00:12:48.345 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:12:48.345 | 99.99th=[ 161] 00:12:48.345 bw ( KiB/s): min=131334, max=185344, per=7.82%, avg=163518.40, stdev=19538.10, samples=20 00:12:48.345 iops : min= 513, max= 724, avg=638.60, stdev=76.39, samples=20 00:12:48.345 lat (msec) : 100=65.70%, 250=34.30% 00:12:48.345 cpu : usr=0.30%, sys=2.46%, ctx=1595, majf=0, minf=4097 00:12:48.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:48.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.345 issued rwts: total=6451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.345 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.345 job4: (groupid=0, jobs=1): err= 0: pid=78462: Sun Dec 15 05:53:08 2024 00:12:48.345 read: IOPS=539, BW=135MiB/s (141MB/s)(1361MiB/10093msec) 00:12:48.345 slat (usec): min=22, max=36861, avg=1832.20, stdev=4038.18 00:12:48.345 clat (msec): min=24, max=210, avg=116.62, stdev=11.05 00:12:48.345 lat (msec): min=25, max=214, avg=118.45, stdev=11.45 00:12:48.345 clat percentiles (msec): 00:12:48.345 | 1.00th=[ 67], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 113], 00:12:48.345 | 30.00th=[ 115], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 118], 00:12:48.345 | 70.00th=[ 120], 80.00th=[ 122], 90.00th=[ 125], 95.00th=[ 128], 00:12:48.345 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 203], 99.95th=[ 211], 00:12:48.345 | 99.99th=[ 211] 00:12:48.345 bw ( KiB/s): min=131072, max=146432, per=6.59%, avg=137750.70, stdev=3871.72, samples=20 00:12:48.345 iops : min= 512, max= 572, avg=537.95, stdev=15.07, samples=20 00:12:48.345 lat (msec) : 50=0.72%, 100=2.20%, 250=97.08% 00:12:48.345 cpu : usr=0.31%, sys=2.09%, ctx=1328, majf=0, minf=4097 00:12:48.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, 
>=64=98.8% 00:12:48.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.345 issued rwts: total=5445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.345 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.345 job5: (groupid=0, jobs=1): err= 0: pid=78463: Sun Dec 15 05:53:08 2024 00:12:48.345 read: IOPS=642, BW=161MiB/s (168MB/s)(1618MiB/10077msec) 00:12:48.345 slat (usec): min=21, max=27493, avg=1539.86, stdev=3290.56 00:12:48.345 clat (msec): min=28, max=157, avg=97.91, stdev=13.58 00:12:48.345 lat (msec): min=29, max=167, avg=99.45, stdev=13.89 00:12:48.345 clat percentiles (msec): 00:12:48.345 | 1.00th=[ 74], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:12:48.345 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:12:48.345 | 70.00th=[ 105], 80.00th=[ 113], 90.00th=[ 117], 95.00th=[ 121], 00:12:48.345 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 153], 99.95th=[ 157], 00:12:48.345 | 99.99th=[ 157] 00:12:48.345 bw ( KiB/s): min=134656, max=185856, per=7.85%, avg=164032.05, stdev=18753.44, samples=20 00:12:48.345 iops : min= 526, max= 726, avg=640.60, stdev=73.32, samples=20 00:12:48.345 lat (msec) : 50=0.74%, 100=65.29%, 250=33.97% 00:12:48.345 cpu : usr=0.28%, sys=2.77%, ctx=1572, majf=0, minf=4097 00:12:48.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:48.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.345 issued rwts: total=6473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.345 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.345 job6: (groupid=0, jobs=1): err= 0: pid=78464: Sun Dec 15 05:53:08 2024 00:12:48.345 read: IOPS=1306, BW=327MiB/s (342MB/s)(3273MiB/10020msec) 00:12:48.345 slat (usec): min=21, max=16065, avg=758.48, stdev=1699.89 00:12:48.345 clat (usec): min=10328, max=87249, avg=48166.43, stdev=16051.33 00:12:48.345 lat (usec): min=11786, max=92554, avg=48924.91, stdev=16291.44 00:12:48.345 clat percentiles (usec): 00:12:48.346 | 1.00th=[27919], 5.00th=[30278], 10.00th=[31065], 20.00th=[32375], 00:12:48.346 | 30.00th=[33162], 40.00th=[34341], 50.00th=[49021], 60.00th=[58459], 00:12:48.346 | 70.00th=[62129], 80.00th=[64750], 90.00th=[68682], 95.00th=[70779], 00:12:48.346 | 99.00th=[74974], 99.50th=[77071], 99.90th=[81265], 99.95th=[83362], 00:12:48.346 | 99.99th=[87557] 00:12:48.346 bw ( KiB/s): min=242203, max=502802, per=15.95%, avg=333341.35, stdev=112408.37, samples=20 00:12:48.346 iops : min= 946, max= 1964, avg=1302.10, stdev=439.09, samples=20 00:12:48.346 lat (msec) : 20=0.23%, 50=49.92%, 100=49.85% 00:12:48.346 cpu : usr=0.67%, sys=4.10%, ctx=2868, majf=0, minf=4097 00:12:48.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:48.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.346 issued rwts: total=13090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.346 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.346 job7: (groupid=0, jobs=1): err= 0: pid=78465: Sun Dec 15 05:53:08 2024 00:12:48.346 read: IOPS=989, BW=247MiB/s (259MB/s)(2482MiB/10030msec) 00:12:48.346 slat (usec): min=21, max=40175, avg=1002.31, stdev=2228.74 00:12:48.346 clat (msec): min=24, max=110, avg=63.53, stdev= 6.72 
00:12:48.346 lat (msec): min=25, max=123, avg=64.53, stdev= 6.72 00:12:48.346 clat percentiles (msec): 00:12:48.346 | 1.00th=[ 51], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:12:48.346 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 65], 00:12:48.346 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 74], 00:12:48.346 | 99.00th=[ 84], 99.50th=[ 103], 99.90th=[ 110], 99.95th=[ 111], 00:12:48.346 | 99.99th=[ 111] 00:12:48.346 bw ( KiB/s): min=200593, max=264192, per=12.08%, avg=252461.50, stdev=13222.90, samples=20 00:12:48.346 iops : min= 783, max= 1032, avg=986.10, stdev=51.73, samples=20 00:12:48.346 lat (msec) : 50=0.96%, 100=98.49%, 250=0.55% 00:12:48.346 cpu : usr=0.69%, sys=3.51%, ctx=2211, majf=0, minf=4097 00:12:48.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:48.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.346 issued rwts: total=9928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.346 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.346 job8: (groupid=0, jobs=1): err= 0: pid=78466: Sun Dec 15 05:53:08 2024 00:12:48.346 read: IOPS=533, BW=133MiB/s (140MB/s)(1345MiB/10089msec) 00:12:48.346 slat (usec): min=21, max=54621, avg=1833.46, stdev=4174.90 00:12:48.346 clat (msec): min=77, max=209, avg=117.98, stdev= 7.87 00:12:48.346 lat (msec): min=77, max=209, avg=119.82, stdev= 8.28 00:12:48.346 clat percentiles (msec): 00:12:48.346 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 111], 20.00th=[ 114], 00:12:48.346 | 30.00th=[ 115], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 120], 00:12:48.346 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 126], 95.00th=[ 129], 00:12:48.346 | 99.00th=[ 140], 99.50th=[ 163], 99.90th=[ 201], 99.95th=[ 211], 00:12:48.346 | 99.99th=[ 211] 00:12:48.346 bw ( KiB/s): min=128000, max=145408, per=6.51%, avg=136073.05, stdev=3802.08, samples=20 00:12:48.346 iops : min= 500, max= 568, avg=531.35, stdev=14.78, samples=20 00:12:48.346 lat (msec) : 100=0.76%, 250=99.24% 00:12:48.346 cpu : usr=0.21%, sys=2.18%, ctx=1333, majf=0, minf=4097 00:12:48.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:48.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.346 issued rwts: total=5380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.346 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.346 job9: (groupid=0, jobs=1): err= 0: pid=78467: Sun Dec 15 05:53:08 2024 00:12:48.346 read: IOPS=534, BW=134MiB/s (140MB/s)(1347MiB/10088msec) 00:12:48.346 slat (usec): min=18, max=60666, avg=1848.36, stdev=4228.85 00:12:48.346 clat (msec): min=65, max=211, avg=117.86, stdev= 9.60 00:12:48.346 lat (msec): min=66, max=211, avg=119.71, stdev= 9.96 00:12:48.346 clat percentiles (msec): 00:12:48.346 | 1.00th=[ 72], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 114], 00:12:48.346 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 120], 00:12:48.346 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 126], 95.00th=[ 129], 00:12:48.346 | 99.00th=[ 142], 99.50th=[ 159], 99.90th=[ 199], 99.95th=[ 199], 00:12:48.346 | 99.99th=[ 213] 00:12:48.346 bw ( KiB/s): min=131072, max=142108, per=6.52%, avg=136320.60, stdev=3382.87, samples=20 00:12:48.346 iops : min= 512, max= 555, avg=532.40, stdev=13.22, samples=20 00:12:48.346 lat (msec) : 100=2.15%, 250=97.85% 
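As a reading aid for the per-job bandwidth lines: per= is each job's share of the group's aggregate read bandwidth reported in the run-status summary below. For job0, 137443 KiB/s out of 2041 MiB/s (about 2089984 KiB/s) is roughly 6.58%, matching the per=6.58% printed for it above.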
00:12:48.346 cpu : usr=0.23%, sys=2.04%, ctx=1305, majf=0, minf=4097 00:12:48.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:48.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.346 issued rwts: total=5388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.346 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.346 job10: (groupid=0, jobs=1): err= 0: pid=78468: Sun Dec 15 05:53:08 2024 00:12:48.346 read: IOPS=992, BW=248MiB/s (260MB/s)(2487MiB/10025msec) 00:12:48.346 slat (usec): min=21, max=49922, avg=1000.18, stdev=2211.10 00:12:48.346 clat (msec): min=22, max=121, avg=63.40, stdev= 6.98 00:12:48.346 lat (msec): min=29, max=121, avg=64.40, stdev= 6.99 00:12:48.346 clat percentiles (msec): 00:12:48.346 | 1.00th=[ 47], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:12:48.346 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 65], 00:12:48.346 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 73], 00:12:48.346 | 99.00th=[ 82], 99.50th=[ 104], 99.90th=[ 118], 99.95th=[ 121], 00:12:48.346 | 99.99th=[ 122] 00:12:48.346 bw ( KiB/s): min=205723, max=267241, per=12.11%, avg=253021.65, stdev=12849.88, samples=20 00:12:48.346 iops : min= 803, max= 1043, avg=988.15, stdev=50.23, samples=20 00:12:48.346 lat (msec) : 50=1.29%, 100=98.16%, 250=0.55% 00:12:48.346 cpu : usr=0.51%, sys=3.95%, ctx=2150, majf=0, minf=4097 00:12:48.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:48.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:48.346 issued rwts: total=9949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.346 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.346 00:12:48.346 Run status group 0 (all jobs): 00:12:48.346 READ: bw=2041MiB/s (2140MB/s), 133MiB/s-327MiB/s (140MB/s-342MB/s), io=20.1GiB (21.6GB), run=10020-10093msec 00:12:48.346 00:12:48.346 Disk stats (read/write): 00:12:48.346 nvme0n1: ios=10760/0, merge=0/0, ticks=1228149/0, in_queue=1228149, util=97.91% 00:12:48.346 nvme10n1: ios=12650/0, merge=0/0, ticks=1229538/0, in_queue=1229538, util=97.86% 00:12:48.346 nvme1n1: ios=16862/0, merge=0/0, ticks=1234632/0, in_queue=1234632, util=98.16% 00:12:48.346 nvme2n1: ios=12797/0, merge=0/0, ticks=1230317/0, in_queue=1230317, util=98.25% 00:12:48.346 nvme3n1: ios=10781/0, merge=0/0, ticks=1227381/0, in_queue=1227381, util=98.28% 00:12:48.346 nvme4n1: ios=12839/0, merge=0/0, ticks=1229677/0, in_queue=1229677, util=98.49% 00:12:48.346 nvme5n1: ios=26118/0, merge=0/0, ticks=1238828/0, in_queue=1238828, util=98.65% 00:12:48.346 nvme6n1: ios=19763/0, merge=0/0, ticks=1235786/0, in_queue=1235786, util=98.63% 00:12:48.346 nvme7n1: ios=10646/0, merge=0/0, ticks=1227243/0, in_queue=1227243, util=98.90% 00:12:48.346 nvme8n1: ios=10672/0, merge=0/0, ticks=1227971/0, in_queue=1227971, util=99.05% 00:12:48.346 nvme9n1: ios=19800/0, merge=0/0, ticks=1236354/0, in_queue=1236354, util=99.06% 00:12:48.346 05:53:08 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:12:48.346 [global] 00:12:48.346 thread=1 00:12:48.346 invalidate=1 00:12:48.346 rw=randwrite 00:12:48.346 time_based=1 00:12:48.346 runtime=10 00:12:48.346 ioengine=libaio 00:12:48.346 direct=1 00:12:48.346 bs=262144 00:12:48.346 
iodepth=64 00:12:48.346 norandommap=1 00:12:48.346 numjobs=1 00:12:48.346 00:12:48.346 [job0] 00:12:48.346 filename=/dev/nvme0n1 00:12:48.346 [job1] 00:12:48.346 filename=/dev/nvme10n1 00:12:48.346 [job2] 00:12:48.346 filename=/dev/nvme1n1 00:12:48.346 [job3] 00:12:48.346 filename=/dev/nvme2n1 00:12:48.346 [job4] 00:12:48.346 filename=/dev/nvme3n1 00:12:48.346 [job5] 00:12:48.346 filename=/dev/nvme4n1 00:12:48.346 [job6] 00:12:48.346 filename=/dev/nvme5n1 00:12:48.346 [job7] 00:12:48.346 filename=/dev/nvme6n1 00:12:48.346 [job8] 00:12:48.346 filename=/dev/nvme7n1 00:12:48.346 [job9] 00:12:48.346 filename=/dev/nvme8n1 00:12:48.346 [job10] 00:12:48.346 filename=/dev/nvme9n1 00:12:48.346 Could not set queue depth (nvme0n1) 00:12:48.346 Could not set queue depth (nvme10n1) 00:12:48.346 Could not set queue depth (nvme1n1) 00:12:48.346 Could not set queue depth (nvme2n1) 00:12:48.346 Could not set queue depth (nvme3n1) 00:12:48.346 Could not set queue depth (nvme4n1) 00:12:48.346 Could not set queue depth (nvme5n1) 00:12:48.346 Could not set queue depth (nvme6n1) 00:12:48.346 Could not set queue depth (nvme7n1) 00:12:48.346 Could not set queue depth (nvme8n1) 00:12:48.347 Could not set queue depth (nvme9n1) 00:12:48.347 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:48.347 fio-3.35 00:12:48.347 Starting 11 threads 00:12:58.409 00:12:58.409 job0: (groupid=0, jobs=1): err= 0: pid=78667: Sun Dec 15 05:53:18 2024 00:12:58.409 write: IOPS=681, BW=170MiB/s (179MB/s)(1719MiB/10090msec); 0 zone resets 00:12:58.409 slat (usec): min=17, max=13456, avg=1448.43, stdev=2443.94 00:12:58.409 clat (msec): min=15, max=173, avg=92.44, stdev= 6.48 00:12:58.409 lat (msec): min=15, max=173, avg=93.89, stdev= 6.10 00:12:58.409 clat percentiles (msec): 00:12:58.409 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 88], 20.00th=[ 89], 00:12:58.409 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 93], 60.00th=[ 94], 00:12:58.409 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 95], 95.00th=[ 96], 00:12:58.409 | 99.00th=[ 112], 99.50th=[ 129], 99.90th=[ 163], 99.95th=[ 169], 00:12:58.409 | 99.99th=[ 174] 00:12:58.409 bw ( KiB/s): min=164168, max=180224, per=11.80%, avg=174429.20, stdev=2935.30, samples=20 
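Both fio passes here are launched through scripts/fio-wrapper, and its flags line up with the [global] options it printed just above (-i 262144 with bs, -d 64 with iodepth, -t randwrite with rw, -r 10 with runtime). A standalone reproduction of this randwrite pass might look roughly like the sketch below; the /tmp path is made up for illustration and only the first [jobN] stanza is shown, with one stanza per connected namespace (/dev/nvme0n1 through /dev/nvme10n1 in this run).

# Hand-rolled equivalent of the wrapper invocation above (illustrative only).
cat > /tmp/multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/multiconnection.fio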
00:12:58.409 iops : min= 641, max= 704, avg=681.35, stdev=11.52, samples=20 00:12:58.409 lat (msec) : 20=0.06%, 50=0.23%, 100=98.25%, 250=1.45% 00:12:58.409 cpu : usr=1.39%, sys=1.94%, ctx=8032, majf=0, minf=1 00:12:58.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:58.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.409 issued rwts: total=0,6876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.409 job1: (groupid=0, jobs=1): err= 0: pid=78668: Sun Dec 15 05:53:18 2024 00:12:58.409 write: IOPS=409, BW=102MiB/s (107MB/s)(1039MiB/10145msec); 0 zone resets 00:12:58.409 slat (usec): min=20, max=63073, avg=2401.73, stdev=4226.77 00:12:58.409 clat (msec): min=65, max=303, avg=153.75, stdev=19.47 00:12:58.409 lat (msec): min=65, max=303, avg=156.15, stdev=19.32 00:12:58.409 clat percentiles (msec): 00:12:58.409 | 1.00th=[ 103], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 131], 00:12:58.409 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 163], 00:12:58.409 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 165], 95.00th=[ 167], 00:12:58.409 | 99.00th=[ 194], 99.50th=[ 249], 99.90th=[ 292], 99.95th=[ 292], 00:12:58.409 | 99.99th=[ 305] 00:12:58.409 bw ( KiB/s): min=98304, max=131072, per=7.09%, avg=104780.80, stdev=9911.74, samples=20 00:12:58.409 iops : min= 384, max= 512, avg=409.30, stdev=38.72, samples=20 00:12:58.409 lat (msec) : 100=0.84%, 250=98.72%, 500=0.43% 00:12:58.409 cpu : usr=0.71%, sys=1.42%, ctx=6317, majf=0, minf=1 00:12:58.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:58.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.409 issued rwts: total=0,4156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.409 job2: (groupid=0, jobs=1): err= 0: pid=78680: Sun Dec 15 05:53:18 2024 00:12:58.409 write: IOPS=416, BW=104MiB/s (109MB/s)(1057MiB/10149msec); 0 zone resets 00:12:58.409 slat (usec): min=18, max=19728, avg=2335.26, stdev=4088.63 00:12:58.409 clat (msec): min=19, max=309, avg=151.27, stdev=24.30 00:12:58.409 lat (msec): min=19, max=309, avg=153.60, stdev=24.38 00:12:58.409 clat percentiles (msec): 00:12:58.409 | 1.00th=[ 53], 5.00th=[ 118], 10.00th=[ 123], 20.00th=[ 129], 00:12:58.409 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 163], 00:12:58.409 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 165], 95.00th=[ 167], 00:12:58.409 | 99.00th=[ 190], 99.50th=[ 253], 99.90th=[ 300], 99.95th=[ 300], 00:12:58.409 | 99.99th=[ 309] 00:12:58.409 bw ( KiB/s): min=98304, max=142336, per=7.21%, avg=106585.50, stdev=13499.98, samples=20 00:12:58.409 iops : min= 384, max= 556, avg=416.30, stdev=52.65, samples=20 00:12:58.409 lat (msec) : 20=0.09%, 50=0.88%, 100=1.28%, 250=97.23%, 500=0.52% 00:12:58.409 cpu : usr=0.78%, sys=1.35%, ctx=5518, majf=0, minf=1 00:12:58.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:58.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.409 issued rwts: total=0,4227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.409 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:12:58.409 job3: (groupid=0, jobs=1): err= 0: pid=78681: Sun Dec 15 05:53:18 2024 00:12:58.409 write: IOPS=490, BW=123MiB/s (129MB/s)(1240MiB/10113msec); 0 zone resets 00:12:58.409 slat (usec): min=14, max=59736, avg=2011.75, stdev=3512.48 00:12:58.409 clat (msec): min=61, max=233, avg=128.49, stdev= 8.65 00:12:58.409 lat (msec): min=61, max=233, avg=130.50, stdev= 8.06 00:12:58.409 clat percentiles (msec): 00:12:58.409 | 1.00th=[ 113], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 124], 00:12:58.409 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 130], 60.00th=[ 131], 00:12:58.409 | 70.00th=[ 131], 80.00th=[ 132], 90.00th=[ 133], 95.00th=[ 133], 00:12:58.409 | 99.00th=[ 148], 99.50th=[ 186], 99.90th=[ 226], 99.95th=[ 226], 00:12:58.409 | 99.99th=[ 234] 00:12:58.409 bw ( KiB/s): min=118784, max=131072, per=8.48%, avg=125312.00, stdev=2240.24, samples=20 00:12:58.409 iops : min= 464, max= 512, avg=489.50, stdev= 8.75, samples=20 00:12:58.409 lat (msec) : 100=0.67%, 250=99.33% 00:12:58.409 cpu : usr=0.90%, sys=1.32%, ctx=6732, majf=0, minf=1 00:12:58.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:12:58.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.409 issued rwts: total=0,4958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.409 job4: (groupid=0, jobs=1): err= 0: pid=78682: Sun Dec 15 05:53:18 2024 00:12:58.409 write: IOPS=654, BW=164MiB/s (172MB/s)(1650MiB/10086msec); 0 zone resets 00:12:58.409 slat (usec): min=18, max=32284, avg=1509.64, stdev=2617.81 00:12:58.409 clat (msec): min=36, max=170, avg=96.26, stdev=13.44 00:12:58.409 lat (msec): min=36, max=170, avg=97.77, stdev=13.40 00:12:58.409 clat percentiles (msec): 00:12:58.409 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 87], 20.00th=[ 89], 00:12:58.409 | 30.00th=[ 92], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 93], 00:12:58.409 | 70.00th=[ 94], 80.00th=[ 94], 90.00th=[ 123], 95.00th=[ 128], 00:12:58.409 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 161], 99.95th=[ 165], 00:12:58.409 | 99.99th=[ 171] 00:12:58.409 bw ( KiB/s): min=114688, max=180224, per=11.32%, avg=167347.20, stdev=20400.54, samples=20 00:12:58.409 iops : min= 448, max= 704, avg=653.70, stdev=79.69, samples=20 00:12:58.409 lat (msec) : 50=0.12%, 100=84.82%, 250=15.06% 00:12:58.409 cpu : usr=1.30%, sys=1.74%, ctx=7801, majf=0, minf=1 00:12:58.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:58.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.409 issued rwts: total=0,6600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.409 job5: (groupid=0, jobs=1): err= 0: pid=78683: Sun Dec 15 05:53:18 2024 00:12:58.410 write: IOPS=656, BW=164MiB/s (172MB/s)(1656MiB/10084msec); 0 zone resets 00:12:58.410 slat (usec): min=18, max=23034, avg=1504.94, stdev=2587.02 00:12:58.410 clat (msec): min=15, max=173, avg=95.93, stdev=13.24 00:12:58.410 lat (msec): min=15, max=173, avg=97.43, stdev=13.20 00:12:58.410 clat percentiles (msec): 00:12:58.410 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 87], 20.00th=[ 89], 00:12:58.410 | 30.00th=[ 92], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 93], 00:12:58.410 | 70.00th=[ 94], 80.00th=[ 94], 90.00th=[ 123], 95.00th=[ 127], 
00:12:58.410 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 169], 00:12:58.410 | 99.99th=[ 174] 00:12:58.410 bw ( KiB/s): min=129024, max=178688, per=11.36%, avg=167923.30, stdev=18483.07, samples=20 00:12:58.410 iops : min= 504, max= 698, avg=655.95, stdev=72.20, samples=20 00:12:58.410 lat (msec) : 20=0.06%, 50=0.30%, 100=84.45%, 250=15.19% 00:12:58.410 cpu : usr=0.97%, sys=1.68%, ctx=9004, majf=0, minf=1 00:12:58.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:58.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.410 issued rwts: total=0,6622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.410 job6: (groupid=0, jobs=1): err= 0: pid=78684: Sun Dec 15 05:53:18 2024 00:12:58.410 write: IOPS=493, BW=123MiB/s (129MB/s)(1248MiB/10115msec); 0 zone resets 00:12:58.410 slat (usec): min=17, max=26481, avg=1997.09, stdev=3419.49 00:12:58.410 clat (msec): min=3, max=239, avg=127.66, stdev=13.19 00:12:58.410 lat (msec): min=3, max=240, avg=129.65, stdev=12.95 00:12:58.410 clat percentiles (msec): 00:12:58.410 | 1.00th=[ 53], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 124], 00:12:58.410 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 130], 60.00th=[ 131], 00:12:58.410 | 70.00th=[ 131], 80.00th=[ 132], 90.00th=[ 133], 95.00th=[ 133], 00:12:58.410 | 99.00th=[ 146], 99.50th=[ 192], 99.90th=[ 232], 99.95th=[ 232], 00:12:58.410 | 99.99th=[ 241] 00:12:58.410 bw ( KiB/s): min=122880, max=136192, per=8.53%, avg=126156.80, stdev=2850.21, samples=20 00:12:58.410 iops : min= 480, max= 532, avg=492.80, stdev=11.13, samples=20 00:12:58.410 lat (msec) : 4=0.04%, 20=0.16%, 50=0.72%, 100=0.66%, 250=98.42% 00:12:58.410 cpu : usr=1.02%, sys=1.53%, ctx=5524, majf=0, minf=1 00:12:58.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:12:58.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.410 issued rwts: total=0,4991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.410 job7: (groupid=0, jobs=1): err= 0: pid=78685: Sun Dec 15 05:53:18 2024 00:12:58.410 write: IOPS=411, BW=103MiB/s (108MB/s)(1044MiB/10148msec); 0 zone resets 00:12:58.410 slat (usec): min=19, max=46741, avg=2389.43, stdev=4180.29 00:12:58.410 clat (msec): min=14, max=309, avg=153.01, stdev=22.99 00:12:58.410 lat (msec): min=14, max=309, avg=155.40, stdev=22.96 00:12:58.410 clat percentiles (msec): 00:12:58.410 | 1.00th=[ 50], 5.00th=[ 121], 10.00th=[ 126], 20.00th=[ 131], 00:12:58.410 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 163], 00:12:58.410 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 165], 95.00th=[ 167], 00:12:58.410 | 99.00th=[ 201], 99.50th=[ 253], 99.90th=[ 300], 99.95th=[ 300], 00:12:58.410 | 99.99th=[ 309] 00:12:58.410 bw ( KiB/s): min=98304, max=131072, per=7.12%, avg=105292.80, stdev=10990.31, samples=20 00:12:58.410 iops : min= 384, max= 512, avg=411.30, stdev=42.93, samples=20 00:12:58.410 lat (msec) : 20=0.19%, 50=0.86%, 100=0.38%, 250=98.04%, 500=0.53% 00:12:58.410 cpu : usr=0.92%, sys=1.21%, ctx=3067, majf=0, minf=1 00:12:58.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:58.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:58.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.410 issued rwts: total=0,4176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.410 job8: (groupid=0, jobs=1): err= 0: pid=78686: Sun Dec 15 05:53:18 2024 00:12:58.410 write: IOPS=493, BW=123MiB/s (129MB/s)(1249MiB/10120msec); 0 zone resets 00:12:58.410 slat (usec): min=17, max=14345, avg=1976.17, stdev=3411.19 00:12:58.410 clat (msec): min=16, max=241, avg=127.65, stdev=12.07 00:12:58.410 lat (msec): min=16, max=241, avg=129.62, stdev=11.81 00:12:58.410 clat percentiles (msec): 00:12:58.410 | 1.00th=[ 75], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 124], 00:12:58.410 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 131], 00:12:58.410 | 70.00th=[ 131], 80.00th=[ 132], 90.00th=[ 133], 95.00th=[ 133], 00:12:58.410 | 99.00th=[ 148], 99.50th=[ 194], 99.90th=[ 234], 99.95th=[ 234], 00:12:58.410 | 99.99th=[ 243] 00:12:58.410 bw ( KiB/s): min=122880, max=137216, per=8.54%, avg=126259.20, stdev=3188.36, samples=20 00:12:58.410 iops : min= 480, max= 536, avg=493.20, stdev=12.45, samples=20 00:12:58.410 lat (msec) : 20=0.10%, 50=0.30%, 100=1.66%, 250=97.94% 00:12:58.410 cpu : usr=0.80%, sys=1.27%, ctx=5909, majf=0, minf=1 00:12:58.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:12:58.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.410 issued rwts: total=0,4995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.410 job9: (groupid=0, jobs=1): err= 0: pid=78687: Sun Dec 15 05:53:18 2024 00:12:58.410 write: IOPS=407, BW=102MiB/s (107MB/s)(1033MiB/10139msec); 0 zone resets 00:12:58.410 slat (usec): min=17, max=88758, avg=2414.52, stdev=4341.16 00:12:58.410 clat (msec): min=91, max=305, avg=154.63, stdev=18.48 00:12:58.410 lat (msec): min=91, max=305, avg=157.05, stdev=18.26 00:12:58.410 clat percentiles (msec): 00:12:58.410 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 133], 00:12:58.410 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 163], 00:12:58.410 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 165], 95.00th=[ 167], 00:12:58.410 | 99.00th=[ 197], 99.50th=[ 251], 99.90th=[ 296], 99.95th=[ 296], 00:12:58.410 | 99.99th=[ 305] 00:12:58.410 bw ( KiB/s): min=98304, max=131584, per=7.04%, avg=104115.20, stdev=9298.38, samples=20 00:12:58.410 iops : min= 384, max= 514, avg=406.70, stdev=36.32, samples=20 00:12:58.410 lat (msec) : 100=0.19%, 250=99.37%, 500=0.44% 00:12:58.410 cpu : usr=0.89%, sys=1.23%, ctx=5328, majf=0, minf=1 00:12:58.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:58.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.410 issued rwts: total=0,4130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.410 job10: (groupid=0, jobs=1): err= 0: pid=78688: Sun Dec 15 05:53:18 2024 00:12:58.410 write: IOPS=681, BW=170MiB/s (179MB/s)(1718MiB/10081msec); 0 zone resets 00:12:58.410 slat (usec): min=17, max=14408, avg=1450.05, stdev=2456.50 00:12:58.410 clat (msec): min=13, max=164, avg=92.41, stdev= 6.33 00:12:58.410 lat (msec): min=13, max=169, avg=93.86, stdev= 5.94 00:12:58.411 
clat percentiles (msec): 00:12:58.411 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 89], 00:12:58.411 | 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 93], 60.00th=[ 94], 00:12:58.411 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 95], 95.00th=[ 96], 00:12:58.411 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 159], 99.95th=[ 165], 00:12:58.411 | 99.99th=[ 165] 00:12:58.411 bw ( KiB/s): min=164168, max=178176, per=11.79%, avg=174326.80, stdev=2903.77, samples=20 00:12:58.411 iops : min= 641, max= 696, avg=680.95, stdev=11.39, samples=20 00:12:58.411 lat (msec) : 20=0.06%, 50=0.23%, 100=98.04%, 250=1.67% 00:12:58.411 cpu : usr=1.10%, sys=1.67%, ctx=9314, majf=0, minf=1 00:12:58.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:58.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:58.411 issued rwts: total=0,6872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.411 00:12:58.411 Run status group 0 (all jobs): 00:12:58.411 WRITE: bw=1444MiB/s (1514MB/s), 102MiB/s-170MiB/s (107MB/s-179MB/s), io=14.3GiB (15.4GB), run=10081-10149msec 00:12:58.411 00:12:58.411 Disk stats (read/write): 00:12:58.411 nvme0n1: ios=49/13595, merge=0/0, ticks=38/1214701, in_queue=1214739, util=97.84% 00:12:58.411 nvme10n1: ios=49/8159, merge=0/0, ticks=52/1209003, in_queue=1209055, util=97.94% 00:12:58.411 nvme1n1: ios=43/8309, merge=0/0, ticks=36/1210553, in_queue=1210589, util=98.12% 00:12:58.411 nvme2n1: ios=23/9752, merge=0/0, ticks=39/1211592, in_queue=1211631, util=97.97% 00:12:58.411 nvme3n1: ios=5/13030, merge=0/0, ticks=15/1213430, in_queue=1213445, util=97.99% 00:12:58.411 nvme4n1: ios=0/13084, merge=0/0, ticks=0/1214460, in_queue=1214460, util=98.31% 00:12:58.411 nvme5n1: ios=0/9830, merge=0/0, ticks=0/1211298, in_queue=1211298, util=98.39% 00:12:58.411 nvme6n1: ios=0/8206, merge=0/0, ticks=0/1209652, in_queue=1209652, util=98.47% 00:12:58.411 nvme7n1: ios=0/9845, merge=0/0, ticks=0/1213324, in_queue=1213324, util=98.84% 00:12:58.411 nvme8n1: ios=0/8109, merge=0/0, ticks=0/1209879, in_queue=1209879, util=98.81% 00:12:58.411 nvme9n1: ios=0/13569, merge=0/0, ticks=0/1213104, in_queue=1213104, util=98.90% 00:12:58.411 05:53:18 -- target/multiconnection.sh@36 -- # sync 00:12:58.411 05:53:18 -- target/multiconnection.sh@37 -- # seq 1 11 00:12:58.411 05:53:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.411 05:53:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.411 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:12:58.411 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:12:58.411 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.411 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.411 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.411 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.411 
05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.411 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.411 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:12:58.411 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:12:58.411 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:12:58.411 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:12:58.411 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.411 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:58.411 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.411 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.411 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.411 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.411 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:12:58.411 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:12:58.411 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:12:58.411 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.411 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:58.411 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.411 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.411 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.411 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.411 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:12:58.411 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:12:58.411 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:12:58.411 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:12:58.411 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.411 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:58.411 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.411 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.411 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.411 05:53:19 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.411 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:12:58.411 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:12:58.411 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:12:58.411 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.411 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:12:58.411 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.411 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:12:58.411 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.411 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.411 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.412 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.412 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:12:58.412 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:12:58.412 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:12:58.412 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:12:58.412 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.412 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:12:58.412 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.412 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.412 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.412 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.412 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:12:58.412 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:12:58.412 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:12:58.412 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:12:58.412 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.412 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:12:58.412 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.412 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.412 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.412 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.412 05:53:19 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:12:58.412 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:12:58.412 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:12:58.412 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.412 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:12:58.412 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.412 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.412 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.412 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.412 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:12:58.412 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:12:58.412 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:12:58.412 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:12:58.412 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.412 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:12:58.412 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.412 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.412 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.412 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.412 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:12:58.412 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:12:58.412 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:12:58.412 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:12:58.412 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.412 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:12:58.412 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.412 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.412 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.412 05:53:19 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.412 05:53:19 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 
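The teardown loop traced through here works subsystem by subsystem: nvme disconnect for cnode1 through cnode11, a waitforserial_disconnect poll until the SPDKN serial disappears from lsblk, then rpc_cmd nvmf_delete_subsystem against the running target (rpc_cmd forwards to the SPDK target's JSON-RPC interface). Roughly, and with the poll's retry spacing assumed since the trace only shows it succeeding on the first check:

# Sketch of the per-subsystem disconnect/cleanup loop (illustrative, not the verbatim script).
waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1                   # give up if the namespace never goes away
        sleep 1
    done
    return 0
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    waitforserial_disconnect "SPDK$i"
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done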
00:12:58.412 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:12:58.412 05:53:19 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:12:58.412 05:53:19 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:12:58.412 05:53:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.412 05:53:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:12:58.412 05:53:19 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.412 05:53:19 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:12:58.412 05:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.412 05:53:19 -- common/autotest_common.sh@10 -- # set +x 00:12:58.412 05:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.412 05:53:19 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:12:58.412 05:53:19 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:58.412 05:53:19 -- target/multiconnection.sh@47 -- # nvmftestfini 00:12:58.412 05:53:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:58.412 05:53:19 -- nvmf/common.sh@116 -- # sync 00:12:58.412 05:53:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:58.412 05:53:19 -- nvmf/common.sh@119 -- # set +e 00:12:58.412 05:53:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:58.412 05:53:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:58.412 rmmod nvme_tcp 00:12:58.412 rmmod nvme_fabrics 00:12:58.412 rmmod nvme_keyring 00:12:58.412 05:53:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:58.412 05:53:20 -- nvmf/common.sh@123 -- # set -e 00:12:58.412 05:53:20 -- nvmf/common.sh@124 -- # return 0 00:12:58.412 05:53:20 -- nvmf/common.sh@477 -- # '[' -n 78003 ']' 00:12:58.412 05:53:20 -- nvmf/common.sh@478 -- # killprocess 78003 00:12:58.412 05:53:20 -- common/autotest_common.sh@936 -- # '[' -z 78003 ']' 00:12:58.412 05:53:20 -- common/autotest_common.sh@940 -- # kill -0 78003 00:12:58.412 05:53:20 -- common/autotest_common.sh@941 -- # uname 00:12:58.413 05:53:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.413 05:53:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78003 00:12:58.671 killing process with pid 78003 00:12:58.671 05:53:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:58.671 05:53:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:58.671 05:53:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78003' 00:12:58.671 05:53:20 -- common/autotest_common.sh@955 -- # kill 78003 00:12:58.671 05:53:20 -- common/autotest_common.sh@960 -- # wait 78003 00:12:58.930 05:53:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.930 05:53:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.930 05:53:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.930 05:53:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.930 05:53:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.930 05:53:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.930 05:53:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.930 05:53:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.930 05:53:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
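The final cleanup just traced (multiconnection.sh@43-47 plus nvmftestfini from nvmf/common.sh) reduces to: remove the leftover fio state file, clear the traps, unload the NVMe-over-TCP kernel modules, kill the nvmf target process (pid 78003 in this run) and flush the address on nvmf_init_if. A compressed sketch, assuming only the tcp transport exercised here (the real helper handles more cases):

# Approximate shape of the teardown traced above (illustrative only).
rm -f ./local-job0-0-verify.state
trap - SIGINT SIGTERM EXIT

nvmftestfini() {
    sync
    modprobe -v -r nvme-tcp                          # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    if kill -0 "$nvmfpid" 2>/dev/null; then          # nvmfpid was 78003 here
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid" && wait "$nvmfpid"
    fi
    ip -4 addr flush nvmf_init_if                    # drop the target-side test address
}
nvmftestfini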
00:12:58.930 00:12:58.930 real 0m48.201s 00:12:58.930 user 2m34.462s 00:12:58.930 sys 0m37.529s 00:12:58.930 ************************************ 00:12:58.930 END TEST nvmf_multiconnection 00:12:58.930 ************************************ 00:12:58.930 05:53:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:58.930 05:53:20 -- common/autotest_common.sh@10 -- # set +x 00:12:58.930 05:53:20 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:58.930 05:53:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:58.930 05:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.930 05:53:20 -- common/autotest_common.sh@10 -- # set +x 00:12:58.930 ************************************ 00:12:58.930 START TEST nvmf_initiator_timeout 00:12:58.930 ************************************ 00:12:58.930 05:53:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:58.930 * Looking for test storage... 00:12:58.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.930 05:53:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:58.930 05:53:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:58.930 05:53:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:59.189 05:53:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:59.189 05:53:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:59.189 05:53:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:59.189 05:53:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:59.189 05:53:20 -- scripts/common.sh@335 -- # IFS=.-: 00:12:59.189 05:53:20 -- scripts/common.sh@335 -- # read -ra ver1 00:12:59.189 05:53:20 -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.189 05:53:20 -- scripts/common.sh@336 -- # read -ra ver2 00:12:59.189 05:53:20 -- scripts/common.sh@337 -- # local 'op=<' 00:12:59.189 05:53:20 -- scripts/common.sh@339 -- # ver1_l=2 00:12:59.189 05:53:20 -- scripts/common.sh@340 -- # ver2_l=1 00:12:59.189 05:53:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:59.189 05:53:20 -- scripts/common.sh@343 -- # case "$op" in 00:12:59.189 05:53:20 -- scripts/common.sh@344 -- # : 1 00:12:59.189 05:53:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:59.189 05:53:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
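The scripts/common.sh trace that follows is the lcov version gate: lt 1.15 2 splits both version strings on '.', '-' or ':' and compares them numerically component by component (cmp_versions plus the decimal sanitizer). A compact, behaviorally similar stand-in, assuming purely numeric components unlike the real helper:

# Minimal version comparison in the spirit of the traced cmp_versions (illustrative).
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                         # equal versions: strict "less than" is false
}
lt 1.15 2 && echo "lcov predates 2.x"                # succeeds here: 1 < 2 on the first component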
ver1_l : ver2_l) )) 00:12:59.189 05:53:20 -- scripts/common.sh@364 -- # decimal 1 00:12:59.189 05:53:20 -- scripts/common.sh@352 -- # local d=1 00:12:59.189 05:53:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.189 05:53:20 -- scripts/common.sh@354 -- # echo 1 00:12:59.189 05:53:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:59.189 05:53:20 -- scripts/common.sh@365 -- # decimal 2 00:12:59.189 05:53:20 -- scripts/common.sh@352 -- # local d=2 00:12:59.189 05:53:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.190 05:53:20 -- scripts/common.sh@354 -- # echo 2 00:12:59.190 05:53:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:59.190 05:53:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:59.190 05:53:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:59.190 05:53:20 -- scripts/common.sh@367 -- # return 0 00:12:59.190 05:53:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.190 05:53:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.190 --rc genhtml_branch_coverage=1 00:12:59.190 --rc genhtml_function_coverage=1 00:12:59.190 --rc genhtml_legend=1 00:12:59.190 --rc geninfo_all_blocks=1 00:12:59.190 --rc geninfo_unexecuted_blocks=1 00:12:59.190 00:12:59.190 ' 00:12:59.190 05:53:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.190 --rc genhtml_branch_coverage=1 00:12:59.190 --rc genhtml_function_coverage=1 00:12:59.190 --rc genhtml_legend=1 00:12:59.190 --rc geninfo_all_blocks=1 00:12:59.190 --rc geninfo_unexecuted_blocks=1 00:12:59.190 00:12:59.190 ' 00:12:59.190 05:53:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.190 --rc genhtml_branch_coverage=1 00:12:59.190 --rc genhtml_function_coverage=1 00:12:59.190 --rc genhtml_legend=1 00:12:59.190 --rc geninfo_all_blocks=1 00:12:59.190 --rc geninfo_unexecuted_blocks=1 00:12:59.190 00:12:59.190 ' 00:12:59.190 05:53:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.190 --rc genhtml_branch_coverage=1 00:12:59.190 --rc genhtml_function_coverage=1 00:12:59.190 --rc genhtml_legend=1 00:12:59.190 --rc geninfo_all_blocks=1 00:12:59.190 --rc geninfo_unexecuted_blocks=1 00:12:59.190 00:12:59.190 ' 00:12:59.190 05:53:20 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:59.190 05:53:20 -- nvmf/common.sh@7 -- # uname -s 00:12:59.190 05:53:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.190 05:53:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.190 05:53:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.190 05:53:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.190 05:53:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.190 05:53:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.190 05:53:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.190 05:53:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.190 05:53:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.190 05:53:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.190 05:53:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 
00:12:59.190 05:53:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:12:59.190 05:53:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.190 05:53:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.190 05:53:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:59.190 05:53:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:59.190 05:53:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.190 05:53:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.190 05:53:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.190 05:53:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.190 05:53:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.190 05:53:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.190 05:53:20 -- paths/export.sh@5 -- # export PATH 00:12:59.190 05:53:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.190 05:53:20 -- nvmf/common.sh@46 -- # : 0 00:12:59.190 05:53:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.190 05:53:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.190 05:53:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.190 05:53:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.190 05:53:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.190 05:53:20 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:59.190 05:53:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.190 05:53:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.190 05:53:20 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.190 05:53:20 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.190 05:53:20 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:12:59.190 05:53:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.190 05:53:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.190 05:53:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.190 05:53:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.190 05:53:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.190 05:53:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.190 05:53:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.190 05:53:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.190 05:53:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:59.190 05:53:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:59.190 05:53:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:59.190 05:53:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:59.190 05:53:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:59.190 05:53:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:59.190 05:53:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.190 05:53:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.190 05:53:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:59.190 05:53:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:59.190 05:53:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:59.190 05:53:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:59.190 05:53:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:59.190 05:53:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.190 05:53:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:59.190 05:53:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:59.190 05:53:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:59.190 05:53:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:59.190 05:53:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:59.190 05:53:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:59.190 Cannot find device "nvmf_tgt_br" 00:12:59.190 05:53:20 -- nvmf/common.sh@154 -- # true 00:12:59.190 05:53:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:59.190 Cannot find device "nvmf_tgt_br2" 00:12:59.190 05:53:20 -- nvmf/common.sh@155 -- # true 00:12:59.190 05:53:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:59.190 05:53:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:59.190 Cannot find device "nvmf_tgt_br" 00:12:59.190 05:53:20 -- nvmf/common.sh@157 -- # true 00:12:59.190 05:53:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:59.190 Cannot find device "nvmf_tgt_br2" 00:12:59.190 05:53:20 -- nvmf/common.sh@158 -- # true 00:12:59.190 05:53:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:59.190 05:53:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:59.190 05:53:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:12:59.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.190 05:53:20 -- nvmf/common.sh@161 -- # true 00:12:59.190 05:53:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.190 05:53:20 -- nvmf/common.sh@162 -- # true 00:12:59.190 05:53:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:59.190 05:53:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:59.190 05:53:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.190 05:53:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.190 05:53:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:59.449 05:53:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.450 05:53:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.450 05:53:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:59.450 05:53:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:59.450 05:53:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:59.450 05:53:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:59.450 05:53:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:59.450 05:53:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:59.450 05:53:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.450 05:53:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.450 05:53:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.450 05:53:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:59.450 05:53:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:59.450 05:53:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.450 05:53:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.450 05:53:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.450 05:53:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.450 05:53:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.450 05:53:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:59.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:12:59.450 00:12:59.450 --- 10.0.0.2 ping statistics --- 00:12:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.450 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:59.450 05:53:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:59.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:12:59.450 00:12:59.450 --- 10.0.0.3 ping statistics --- 00:12:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.450 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:59.450 05:53:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:59.450 00:12:59.450 --- 10.0.0.1 ping statistics --- 00:12:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.450 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:59.450 05:53:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.450 05:53:20 -- nvmf/common.sh@421 -- # return 0 00:12:59.450 05:53:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:59.450 05:53:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.450 05:53:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:59.450 05:53:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:59.450 05:53:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.450 05:53:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:59.450 05:53:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:59.450 05:53:21 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:12:59.450 05:53:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:59.450 05:53:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.450 05:53:21 -- common/autotest_common.sh@10 -- # set +x 00:12:59.450 05:53:21 -- nvmf/common.sh@469 -- # nvmfpid=79067 00:12:59.450 05:53:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.450 05:53:21 -- nvmf/common.sh@470 -- # waitforlisten 79067 00:12:59.450 05:53:21 -- common/autotest_common.sh@829 -- # '[' -z 79067 ']' 00:12:59.450 05:53:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.450 05:53:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.450 05:53:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.450 05:53:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.450 05:53:21 -- common/autotest_common.sh@10 -- # set +x 00:12:59.450 [2024-12-15 05:53:21.059456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:59.450 [2024-12-15 05:53:21.059572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.709 [2024-12-15 05:53:21.200139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.709 [2024-12-15 05:53:21.240363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:59.709 [2024-12-15 05:53:21.240816] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.709 [2024-12-15 05:53:21.240993] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.709 [2024-12-15 05:53:21.241154] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
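Read linearly, the nvmf_veth_init plumbing traced above builds a small bridged veth topology and then launches the target inside the new namespace. A condensed sketch using only the addresses, device names, and flags recorded in the trace (the real helper also tolerates leftovers from earlier runs, hence the "Cannot find device" messages):
# Network topology sketch (values copied from the trace above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # reachability check, as in the trace
modprobe nvme-tcp                                                # initiator transport
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &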
00:12:59.709 [2024-12-15 05:53:21.241434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.709 [2024-12-15 05:53:21.241714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.709 [2024-12-15 05:53:21.241716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.709 [2024-12-15 05:53:21.241587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.646 05:53:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.646 05:53:22 -- common/autotest_common.sh@862 -- # return 0 00:13:00.646 05:53:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:00.646 05:53:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.646 05:53:22 -- common/autotest_common.sh@10 -- # set +x 00:13:00.646 05:53:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.646 05:53:22 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:00.646 05:53:22 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:00.646 05:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.646 05:53:22 -- common/autotest_common.sh@10 -- # set +x 00:13:00.646 Malloc0 00:13:00.646 05:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.646 05:53:22 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:00.646 05:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.646 05:53:22 -- common/autotest_common.sh@10 -- # set +x 00:13:00.646 Delay0 00:13:00.646 05:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.646 05:53:22 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.646 05:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.646 05:53:22 -- common/autotest_common.sh@10 -- # set +x 00:13:00.646 [2024-12-15 05:53:22.170938] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.646 05:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.646 05:53:22 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:00.646 05:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.646 05:53:22 -- common/autotest_common.sh@10 -- # set +x 00:13:00.646 05:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.646 05:53:22 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.646 05:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.646 05:53:22 -- common/autotest_common.sh@10 -- # set +x 00:13:00.646 05:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.646 05:53:22 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.646 05:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.646 05:53:22 -- common/autotest_common.sh@10 -- # set +x 00:13:00.646 [2024-12-15 05:53:22.199084] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.646 05:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.646 05:53:22 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.905 05:53:22 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.905 05:53:22 -- common/autotest_common.sh@1187 -- # local i=0 00:13:00.905 05:53:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.905 05:53:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:00.905 05:53:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:02.808 05:53:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:02.808 05:53:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:02.808 05:53:24 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.808 05:53:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:02.808 05:53:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.808 05:53:24 -- common/autotest_common.sh@1197 -- # return 0 00:13:02.808 05:53:24 -- target/initiator_timeout.sh@35 -- # fio_pid=79131 00:13:02.808 05:53:24 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:02.808 05:53:24 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:02.808 [global] 00:13:02.808 thread=1 00:13:02.808 invalidate=1 00:13:02.808 rw=write 00:13:02.808 time_based=1 00:13:02.808 runtime=60 00:13:02.808 ioengine=libaio 00:13:02.808 direct=1 00:13:02.808 bs=4096 00:13:02.808 iodepth=1 00:13:02.808 norandommap=0 00:13:02.808 numjobs=1 00:13:02.808 00:13:02.808 verify_dump=1 00:13:02.808 verify_backlog=512 00:13:02.808 verify_state_save=0 00:13:02.808 do_verify=1 00:13:02.808 verify=crc32c-intel 00:13:02.808 [job0] 00:13:02.808 filename=/dev/nvme0n1 00:13:02.808 Could not set queue depth (nvme0n1) 00:13:03.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:03.067 fio-3.35 00:13:03.067 Starting 1 thread 00:13:06.354 05:53:27 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:06.354 05:53:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.354 05:53:27 -- common/autotest_common.sh@10 -- # set +x 00:13:06.354 true 00:13:06.354 05:53:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.354 05:53:27 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:06.354 05:53:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.354 05:53:27 -- common/autotest_common.sh@10 -- # set +x 00:13:06.354 true 00:13:06.354 05:53:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.354 05:53:27 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:06.354 05:53:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.354 05:53:27 -- common/autotest_common.sh@10 -- # set +x 00:13:06.354 true 00:13:06.354 05:53:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.354 05:53:27 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:06.354 05:53:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.354 05:53:27 -- common/autotest_common.sh@10 -- # set +x 00:13:06.354 true 00:13:06.354 05:53:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.354 05:53:27 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:08.888 05:53:30 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:08.888 05:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.888 05:53:30 -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 true 00:13:08.888 05:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.888 05:53:30 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:08.888 05:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.888 05:53:30 -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 true 00:13:08.888 05:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.888 05:53:30 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:08.888 05:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.888 05:53:30 -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 true 00:13:08.888 05:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.888 05:53:30 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:08.888 05:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.888 05:53:30 -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 true 00:13:08.888 05:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.888 05:53:30 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:08.888 05:53:30 -- target/initiator_timeout.sh@54 -- # wait 79131 00:14:05.115 00:14:05.115 job0: (groupid=0, jobs=1): err= 0: pid=79152: Sun Dec 15 05:54:24 2024 00:14:05.115 read: IOPS=776, BW=3106KiB/s (3181kB/s)(182MiB/60000msec) 00:14:05.115 slat (usec): min=12, max=18746, avg=16.72, stdev=92.39 00:14:05.115 clat (usec): min=107, max=40742k, avg=1080.96, stdev=188751.05 00:14:05.115 lat (usec): min=167, max=40742k, avg=1097.68, stdev=188751.07 00:14:05.115 clat percentiles (usec): 00:14:05.115 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:14:05.115 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:14:05.115 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 245], 00:14:05.115 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 355], 99.95th=[ 408], 00:14:05.115 | 99.99th=[ 799] 00:14:05.115 write: IOPS=783, BW=3135KiB/s (3210kB/s)(184MiB/60000msec); 0 zone resets 00:14:05.115 slat (usec): min=14, max=506, avg=23.62, stdev= 8.86 00:14:05.115 clat (usec): min=2, max=1834, avg=161.10, stdev=23.53 00:14:05.115 lat (usec): min=135, max=2319, avg=184.72, stdev=25.97 00:14:05.115 clat percentiles (usec): 00:14:05.115 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:14:05.115 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:14:05.115 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 196], 00:14:05.115 | 99.00th=[ 217], 99.50th=[ 231], 99.90th=[ 343], 99.95th=[ 449], 00:14:05.115 | 99.99th=[ 685] 00:14:05.115 bw ( KiB/s): min= 4096, max=11848, per=100.00%, avg=9655.34, stdev=1480.13, samples=38 00:14:05.115 iops : min= 1024, max= 2962, avg=2413.82, stdev=370.05, samples=38 00:14:05.115 lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=98.10% 00:14:05.115 lat (usec) : 500=1.86%, 750=0.03%, 1000=0.01% 00:14:05.115 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:14:05.115 cpu : usr=0.62%, sys=2.39%, ctx=93633, majf=0, minf=5 00:14:05.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.115 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.115 issued rwts: total=46592,47023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.115 00:14:05.115 Run status group 0 (all jobs): 00:14:05.115 READ: bw=3106KiB/s (3181kB/s), 3106KiB/s-3106KiB/s (3181kB/s-3181kB/s), io=182MiB (191MB), run=60000-60000msec 00:14:05.115 WRITE: bw=3135KiB/s (3210kB/s), 3135KiB/s-3135KiB/s (3210kB/s-3210kB/s), io=184MiB (193MB), run=60000-60000msec 00:14:05.115 00:14:05.115 Disk stats (read/write): 00:14:05.115 nvme0n1: ios=46788/46592, merge=0/0, ticks=10307/8340, in_queue=18647, util=99.77% 00:14:05.115 05:54:24 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.115 05:54:24 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.115 05:54:24 -- common/autotest_common.sh@1208 -- # local i=0 00:14:05.115 05:54:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:05.115 05:54:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.115 05:54:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:05.115 05:54:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.115 nvmf hotplug test: fio successful as expected 00:14:05.115 05:54:24 -- common/autotest_common.sh@1220 -- # return 0 00:14:05.115 05:54:24 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:05.115 05:54:24 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:05.115 05:54:24 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.115 05:54:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.115 05:54:24 -- common/autotest_common.sh@10 -- # set +x 00:14:05.115 05:54:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.115 05:54:24 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:05.115 05:54:24 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:05.115 05:54:24 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:05.115 05:54:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.115 05:54:24 -- nvmf/common.sh@116 -- # sync 00:14:05.115 05:54:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.115 05:54:24 -- nvmf/common.sh@119 -- # set +e 00:14:05.115 05:54:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.115 05:54:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.115 rmmod nvme_tcp 00:14:05.115 rmmod nvme_fabrics 00:14:05.115 rmmod nvme_keyring 00:14:05.115 05:54:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.115 05:54:24 -- nvmf/common.sh@123 -- # set -e 00:14:05.115 05:54:24 -- nvmf/common.sh@124 -- # return 0 00:14:05.115 05:54:24 -- nvmf/common.sh@477 -- # '[' -n 79067 ']' 00:14:05.115 05:54:24 -- nvmf/common.sh@478 -- # killprocess 79067 00:14:05.115 05:54:24 -- common/autotest_common.sh@936 -- # '[' -z 79067 ']' 00:14:05.115 05:54:24 -- common/autotest_common.sh@940 -- # kill -0 79067 00:14:05.115 05:54:24 -- common/autotest_common.sh@941 -- # uname 00:14:05.115 05:54:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.115 05:54:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 79067 00:14:05.115 killing process with pid 79067 00:14:05.115 05:54:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.115 05:54:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.115 05:54:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79067' 00:14:05.115 05:54:24 -- common/autotest_common.sh@955 -- # kill 79067 00:14:05.115 05:54:24 -- common/autotest_common.sh@960 -- # wait 79067 00:14:05.115 05:54:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:05.115 05:54:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:05.115 05:54:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:05.115 05:54:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.115 05:54:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:05.116 05:54:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.116 05:54:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.116 05:54:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.116 05:54:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:05.116 00:14:05.116 real 1m4.571s 00:14:05.116 user 3m53.785s 00:14:05.116 sys 0m21.952s 00:14:05.116 05:54:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:05.116 05:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.116 ************************************ 00:14:05.116 END TEST nvmf_initiator_timeout 00:14:05.116 ************************************ 00:14:05.116 05:54:25 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:05.116 05:54:25 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:05.116 05:54:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.116 05:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.116 05:54:25 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:05.116 05:54:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.116 05:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.116 05:54:25 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:05.116 05:54:25 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:05.116 05:54:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:05.116 05:54:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.116 05:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.116 ************************************ 00:14:05.116 START TEST nvmf_identify 00:14:05.116 ************************************ 00:14:05.116 05:54:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:05.116 * Looking for test storage... 
00:14:05.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:05.116 05:54:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:05.116 05:54:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:05.116 05:54:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:05.116 05:54:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:05.116 05:54:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:05.116 05:54:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:05.116 05:54:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:05.116 05:54:25 -- scripts/common.sh@335 -- # IFS=.-: 00:14:05.116 05:54:25 -- scripts/common.sh@335 -- # read -ra ver1 00:14:05.116 05:54:25 -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.116 05:54:25 -- scripts/common.sh@336 -- # read -ra ver2 00:14:05.116 05:54:25 -- scripts/common.sh@337 -- # local 'op=<' 00:14:05.116 05:54:25 -- scripts/common.sh@339 -- # ver1_l=2 00:14:05.116 05:54:25 -- scripts/common.sh@340 -- # ver2_l=1 00:14:05.116 05:54:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:05.116 05:54:25 -- scripts/common.sh@343 -- # case "$op" in 00:14:05.116 05:54:25 -- scripts/common.sh@344 -- # : 1 00:14:05.116 05:54:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:05.116 05:54:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.116 05:54:25 -- scripts/common.sh@364 -- # decimal 1 00:14:05.116 05:54:25 -- scripts/common.sh@352 -- # local d=1 00:14:05.116 05:54:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.116 05:54:25 -- scripts/common.sh@354 -- # echo 1 00:14:05.116 05:54:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:05.116 05:54:25 -- scripts/common.sh@365 -- # decimal 2 00:14:05.116 05:54:25 -- scripts/common.sh@352 -- # local d=2 00:14:05.116 05:54:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.116 05:54:25 -- scripts/common.sh@354 -- # echo 2 00:14:05.116 05:54:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:05.116 05:54:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:05.116 05:54:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:05.116 05:54:25 -- scripts/common.sh@367 -- # return 0 00:14:05.116 05:54:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.116 05:54:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:05.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.116 --rc genhtml_branch_coverage=1 00:14:05.116 --rc genhtml_function_coverage=1 00:14:05.116 --rc genhtml_legend=1 00:14:05.116 --rc geninfo_all_blocks=1 00:14:05.116 --rc geninfo_unexecuted_blocks=1 00:14:05.116 00:14:05.116 ' 00:14:05.116 05:54:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:05.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.116 --rc genhtml_branch_coverage=1 00:14:05.116 --rc genhtml_function_coverage=1 00:14:05.116 --rc genhtml_legend=1 00:14:05.116 --rc geninfo_all_blocks=1 00:14:05.116 --rc geninfo_unexecuted_blocks=1 00:14:05.116 00:14:05.116 ' 00:14:05.116 05:54:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:05.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.116 --rc genhtml_branch_coverage=1 00:14:05.116 --rc genhtml_function_coverage=1 00:14:05.116 --rc genhtml_legend=1 00:14:05.116 --rc geninfo_all_blocks=1 00:14:05.116 --rc geninfo_unexecuted_blocks=1 00:14:05.116 00:14:05.116 ' 00:14:05.116 
05:54:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:05.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.116 --rc genhtml_branch_coverage=1 00:14:05.116 --rc genhtml_function_coverage=1 00:14:05.116 --rc genhtml_legend=1 00:14:05.116 --rc geninfo_all_blocks=1 00:14:05.116 --rc geninfo_unexecuted_blocks=1 00:14:05.116 00:14:05.116 ' 00:14:05.116 05:54:25 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.116 05:54:25 -- nvmf/common.sh@7 -- # uname -s 00:14:05.116 05:54:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.116 05:54:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.116 05:54:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.116 05:54:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.116 05:54:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.116 05:54:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.116 05:54:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.116 05:54:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.116 05:54:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.116 05:54:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.116 05:54:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:14:05.116 05:54:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:14:05.116 05:54:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.116 05:54:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.116 05:54:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.116 05:54:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.116 05:54:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.116 05:54:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.116 05:54:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.116 05:54:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.116 05:54:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.116 05:54:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.116 05:54:25 -- paths/export.sh@5 -- # export PATH 00:14:05.116 05:54:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.116 05:54:25 -- nvmf/common.sh@46 -- # : 0 00:14:05.116 05:54:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:05.116 05:54:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:05.116 05:54:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:05.116 05:54:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.116 05:54:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.116 05:54:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:05.116 05:54:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:05.116 05:54:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:05.116 05:54:25 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:05.116 05:54:25 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:05.116 05:54:25 -- host/identify.sh@14 -- # nvmftestinit 00:14:05.116 05:54:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:05.117 05:54:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.117 05:54:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:05.117 05:54:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:05.117 05:54:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:05.117 05:54:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.117 05:54:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.117 05:54:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.117 05:54:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:05.117 05:54:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:05.117 05:54:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:05.117 05:54:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:05.117 05:54:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:05.117 05:54:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:05.117 05:54:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.117 05:54:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.117 05:54:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.117 05:54:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:05.117 05:54:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.117 05:54:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.117 05:54:25 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.117 05:54:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.117 05:54:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.117 05:54:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.117 05:54:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.117 05:54:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.117 05:54:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:05.117 05:54:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:05.117 Cannot find device "nvmf_tgt_br" 00:14:05.117 05:54:25 -- nvmf/common.sh@154 -- # true 00:14:05.117 05:54:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.117 Cannot find device "nvmf_tgt_br2" 00:14:05.117 05:54:25 -- nvmf/common.sh@155 -- # true 00:14:05.117 05:54:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:05.117 05:54:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:05.117 Cannot find device "nvmf_tgt_br" 00:14:05.117 05:54:25 -- nvmf/common.sh@157 -- # true 00:14:05.117 05:54:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:05.117 Cannot find device "nvmf_tgt_br2" 00:14:05.117 05:54:25 -- nvmf/common.sh@158 -- # true 00:14:05.117 05:54:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:05.117 05:54:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:05.117 05:54:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.117 05:54:25 -- nvmf/common.sh@161 -- # true 00:14:05.117 05:54:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.117 05:54:25 -- nvmf/common.sh@162 -- # true 00:14:05.117 05:54:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.117 05:54:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.117 05:54:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.117 05:54:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.117 05:54:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.117 05:54:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.117 05:54:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.117 05:54:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:05.117 05:54:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:05.117 05:54:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:05.117 05:54:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:05.117 05:54:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:05.117 05:54:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:05.117 05:54:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.117 05:54:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.117 05:54:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:05.117 05:54:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:05.117 05:54:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:05.117 05:54:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.117 05:54:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.117 05:54:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.117 05:54:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.117 05:54:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.117 05:54:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:05.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:14:05.117 00:14:05.117 --- 10.0.0.2 ping statistics --- 00:14:05.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.117 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:05.117 05:54:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:05.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:05.117 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:05.117 00:14:05.117 --- 10.0.0.3 ping statistics --- 00:14:05.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.117 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:05.117 05:54:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:05.117 00:14:05.117 --- 10.0.0.1 ping statistics --- 00:14:05.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.117 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:05.117 05:54:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.117 05:54:25 -- nvmf/common.sh@421 -- # return 0 00:14:05.117 05:54:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:05.117 05:54:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.117 05:54:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:05.117 05:54:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:05.117 05:54:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.117 05:54:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:05.117 05:54:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:05.117 05:54:25 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:05.117 05:54:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.117 05:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 05:54:25 -- host/identify.sh@19 -- # nvmfpid=80003 00:14:05.117 05:54:25 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:05.117 05:54:25 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:05.117 05:54:25 -- host/identify.sh@23 -- # waitforlisten 80003 00:14:05.117 05:54:25 -- common/autotest_common.sh@829 -- # '[' -z 80003 ']' 00:14:05.117 05:54:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.117 05:54:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:05.117 05:54:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.117 05:54:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.117 05:54:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 [2024-12-15 05:54:25.739299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:05.117 [2024-12-15 05:54:25.739384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.117 [2024-12-15 05:54:25.875064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.117 [2024-12-15 05:54:25.908644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:05.117 [2024-12-15 05:54:25.908768] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.117 [2024-12-15 05:54:25.908780] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.117 [2024-12-15 05:54:25.908789] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.117 [2024-12-15 05:54:25.908954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.117 [2024-12-15 05:54:25.909598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.117 [2024-12-15 05:54:25.909728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.117 [2024-12-15 05:54:25.909862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.117 05:54:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.117 05:54:26 -- common/autotest_common.sh@862 -- # return 0 00:14:05.117 05:54:26 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.117 05:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.117 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 [2024-12-15 05:54:26.751116] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.376 05:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.376 05:54:26 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:05.376 05:54:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.376 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.376 05:54:26 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:05.376 05:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.376 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.376 Malloc0 00:14:05.376 05:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.376 05:54:26 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:05.376 05:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.376 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.376 05:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.376 05:54:26 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:05.376 05:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.376 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.376 
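The rpc_cmd calls traced above, together with the listener calls that follow immediately below, amount to a short provisioning sequence against the target's RPC socket. A sketch assuming rpc_cmd wraps SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock of the nvmf_tgt started earlier (flags copied verbatim from the trace):
# Provisioning sketch for the identify test target.
rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport; option flags as traced
rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
# The next lines of the trace expose the subsystem and the discovery service on 10.0.0.2:4420:
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420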
05:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.376 05:54:26 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.376 05:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.376 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.376 [2024-12-15 05:54:26.844498] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.376 05:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.376 05:54:26 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:05.376 05:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.376 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.376 05:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.376 05:54:26 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:05.376 05:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.376 05:54:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.376 [2024-12-15 05:54:26.860271] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:05.376 [ 00:14:05.376 { 00:14:05.376 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:05.376 "subtype": "Discovery", 00:14:05.376 "listen_addresses": [ 00:14:05.376 { 00:14:05.376 "transport": "TCP", 00:14:05.376 "trtype": "TCP", 00:14:05.376 "adrfam": "IPv4", 00:14:05.376 "traddr": "10.0.0.2", 00:14:05.376 "trsvcid": "4420" 00:14:05.376 } 00:14:05.376 ], 00:14:05.376 "allow_any_host": true, 00:14:05.376 "hosts": [] 00:14:05.376 }, 00:14:05.376 { 00:14:05.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.376 "subtype": "NVMe", 00:14:05.376 "listen_addresses": [ 00:14:05.376 { 00:14:05.376 "transport": "TCP", 00:14:05.376 "trtype": "TCP", 00:14:05.376 "adrfam": "IPv4", 00:14:05.376 "traddr": "10.0.0.2", 00:14:05.376 "trsvcid": "4420" 00:14:05.376 } 00:14:05.376 ], 00:14:05.376 "allow_any_host": true, 00:14:05.376 "hosts": [], 00:14:05.376 "serial_number": "SPDK00000000000001", 00:14:05.376 "model_number": "SPDK bdev Controller", 00:14:05.376 "max_namespaces": 32, 00:14:05.376 "min_cntlid": 1, 00:14:05.376 "max_cntlid": 65519, 00:14:05.376 "namespaces": [ 00:14:05.376 { 00:14:05.376 "nsid": 1, 00:14:05.376 "bdev_name": "Malloc0", 00:14:05.376 "name": "Malloc0", 00:14:05.376 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:05.376 "eui64": "ABCDEF0123456789", 00:14:05.376 "uuid": "fde902f7-fa2d-4e78-816f-983d9fad6aa8" 00:14:05.376 } 00:14:05.376 ] 00:14:05.376 } 00:14:05.376 ] 00:14:05.376 05:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.376 05:54:26 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:05.376 [2024-12-15 05:54:26.893910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:05.376 [2024-12-15 05:54:26.893964] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80039 ] 00:14:05.641 [2024-12-15 05:54:27.030585] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:05.641 [2024-12-15 05:54:27.030659] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:05.641 [2024-12-15 05:54:27.030667] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:05.641 [2024-12-15 05:54:27.030680] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:05.641 [2024-12-15 05:54:27.030691] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:05.641 [2024-12-15 05:54:27.030827] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:05.641 [2024-12-15 05:54:27.030913] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13db540 0 00:14:05.641 [2024-12-15 05:54:27.043963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:05.641 [2024-12-15 05:54:27.043985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:05.641 [2024-12-15 05:54:27.044007] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:05.641 [2024-12-15 05:54:27.044011] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:05.641 [2024-12-15 05:54:27.044067] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.641 [2024-12-15 05:54:27.044074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.044078] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.642 [2024-12-15 05:54:27.044092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:05.642 [2024-12-15 05:54:27.044120] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.642 [2024-12-15 05:54:27.051963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.642 [2024-12-15 05:54:27.051998] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.642 [2024-12-15 05:54:27.052003] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052025] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.642 [2024-12-15 05:54:27.052039] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:05.642 [2024-12-15 05:54:27.052046] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:05.642 [2024-12-15 05:54:27.052053] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:05.642 [2024-12-15 05:54:27.052081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052090] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.642 [2024-12-15 
05:54:27.052094] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.642 [2024-12-15 05:54:27.052103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.642 [2024-12-15 05:54:27.052131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.642 [2024-12-15 05:54:27.052204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.642 [2024-12-15 05:54:27.052211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.642 [2024-12-15 05:54:27.052215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.642 [2024-12-15 05:54:27.052225] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:05.642 [2024-12-15 05:54:27.052233] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:05.642 [2024-12-15 05:54:27.052240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.642 [2024-12-15 05:54:27.052289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.642 [2024-12-15 05:54:27.052306] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.642 [2024-12-15 05:54:27.052370] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.642 [2024-12-15 05:54:27.052377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.642 [2024-12-15 05:54:27.052381] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052385] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.642 [2024-12-15 05:54:27.052392] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:05.642 [2024-12-15 05:54:27.052401] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:05.642 [2024-12-15 05:54:27.052408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.642 [2024-12-15 05:54:27.052424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.642 [2024-12-15 05:54:27.052440] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.642 [2024-12-15 05:54:27.052499] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.642 [2024-12-15 05:54:27.052508] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.642 [2024-12-15 05:54:27.052511] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052516] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.642 [2024-12-15 05:54:27.052523] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:05.642 [2024-12-15 05:54:27.052533] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.642 [2024-12-15 05:54:27.052556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.642 [2024-12-15 05:54:27.052573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.642 [2024-12-15 05:54:27.052625] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.642 [2024-12-15 05:54:27.052632] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.642 [2024-12-15 05:54:27.052636] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.642 [2024-12-15 05:54:27.052646] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:05.642 [2024-12-15 05:54:27.052652] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:05.642 [2024-12-15 05:54:27.052660] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:05.642 [2024-12-15 05:54:27.052765] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:05.642 [2024-12-15 05:54:27.052771] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:05.642 [2024-12-15 05:54:27.052780] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052785] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052789] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.642 [2024-12-15 05:54:27.052796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.642 [2024-12-15 05:54:27.052814] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.642 [2024-12-15 05:54:27.052878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.642 [2024-12-15 05:54:27.052886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.642 [2024-12-15 05:54:27.052890] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
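The DEBUG records in this stretch follow the host-side fabrics bring-up state machine for the discovery controller: connect adminq, read VS and CAP, check CC.EN, disable and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, reset the admin queue, identify the controller, configure AER, then set the keep-alive timeout before reaching ready. When this console output is saved to a file, one quick way to pull out just those transitions — a sketch, with identify.log as a stand-in for wherever the output was saved — is:

  # Hypothetical helper: list the controller state transitions traced in this log.
  grep -o 'setting state to [^(]*' identify.log | uniq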
00:14:05.642 [2024-12-15 05:54:27.052895] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.642 [2024-12-15 05:54:27.052914] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:05.642 [2024-12-15 05:54:27.052927] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.052936] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.642 [2024-12-15 05:54:27.052944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.642 [2024-12-15 05:54:27.052963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.642 [2024-12-15 05:54:27.053031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.642 [2024-12-15 05:54:27.053038] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.642 [2024-12-15 05:54:27.053041] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.053046] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.642 [2024-12-15 05:54:27.053052] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:05.642 [2024-12-15 05:54:27.053057] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:05.642 [2024-12-15 05:54:27.053065] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:05.642 [2024-12-15 05:54:27.053081] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:05.642 [2024-12-15 05:54:27.053092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.053097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.642 [2024-12-15 05:54:27.053101] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.642 [2024-12-15 05:54:27.053109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.642 [2024-12-15 05:54:27.053127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.642 [2024-12-15 05:54:27.053218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.642 [2024-12-15 05:54:27.053234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.643 [2024-12-15 05:54:27.053239] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053243] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13db540): datao=0, datal=4096, cccid=0 00:14:05.643 [2024-12-15 05:54:27.053249] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1414220) on tqpair(0x13db540): expected_datao=0, 
payload_size=4096 00:14:05.643 [2024-12-15 05:54:27.053258] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053264] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.643 [2024-12-15 05:54:27.053279] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.643 [2024-12-15 05:54:27.053283] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053287] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.643 [2024-12-15 05:54:27.053297] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:05.643 [2024-12-15 05:54:27.053303] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:05.643 [2024-12-15 05:54:27.053308] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:05.643 [2024-12-15 05:54:27.053313] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:05.643 [2024-12-15 05:54:27.053319] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:05.643 [2024-12-15 05:54:27.053324] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:05.643 [2024-12-15 05:54:27.053338] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:05.643 [2024-12-15 05:54:27.053346] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053351] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053355] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.053364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.643 [2024-12-15 05:54:27.053384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.643 [2024-12-15 05:54:27.053452] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.643 [2024-12-15 05:54:27.053459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.643 [2024-12-15 05:54:27.053463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414220) on tqpair=0x13db540 00:14:05.643 [2024-12-15 05:54:27.053476] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053480] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.053491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.643 [2024-12-15 
05:54:27.053498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053502] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053506] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.053512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.643 [2024-12-15 05:54:27.053518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053522] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053526] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.053532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.643 [2024-12-15 05:54:27.053538] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053542] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053546] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.053552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.643 [2024-12-15 05:54:27.053558] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:05.643 [2024-12-15 05:54:27.053570] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:05.643 [2024-12-15 05:54:27.053578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053582] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.053593] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.643 [2024-12-15 05:54:27.053612] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414220, cid 0, qid 0 00:14:05.643 [2024-12-15 05:54:27.053619] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414380, cid 1, qid 0 00:14:05.643 [2024-12-15 05:54:27.053624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14144e0, cid 2, qid 0 00:14:05.643 [2024-12-15 05:54:27.053629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.643 [2024-12-15 05:54:27.053634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14147a0, cid 4, qid 0 00:14:05.643 [2024-12-15 05:54:27.053749] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.643 [2024-12-15 05:54:27.053756] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.643 [2024-12-15 05:54:27.053760] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053765] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x14147a0) on tqpair=0x13db540 00:14:05.643 [2024-12-15 05:54:27.053771] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:05.643 [2024-12-15 05:54:27.053777] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:05.643 [2024-12-15 05:54:27.053788] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053793] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.053804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.643 [2024-12-15 05:54:27.053821] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14147a0, cid 4, qid 0 00:14:05.643 [2024-12-15 05:54:27.053916] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.643 [2024-12-15 05:54:27.053925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.643 [2024-12-15 05:54:27.053929] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053933] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13db540): datao=0, datal=4096, cccid=4 00:14:05.643 [2024-12-15 05:54:27.053938] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14147a0) on tqpair(0x13db540): expected_datao=0, payload_size=4096 00:14:05.643 [2024-12-15 05:54:27.053947] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053951] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053960] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.643 [2024-12-15 05:54:27.053966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.643 [2024-12-15 05:54:27.053970] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.053974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14147a0) on tqpair=0x13db540 00:14:05.643 [2024-12-15 05:54:27.053988] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:05.643 [2024-12-15 05:54:27.054014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054021] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.054034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.643 [2024-12-15 05:54:27.054041] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054045] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054049] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13db540) 00:14:05.643 [2024-12-15 05:54:27.054056] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.643 [2024-12-15 05:54:27.054080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14147a0, cid 4, qid 0 00:14:05.643 [2024-12-15 05:54:27.054088] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414900, cid 5, qid 0 00:14:05.643 [2024-12-15 05:54:27.054221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.643 [2024-12-15 05:54:27.054228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.643 [2024-12-15 05:54:27.054232] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054236] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13db540): datao=0, datal=1024, cccid=4 00:14:05.643 [2024-12-15 05:54:27.054241] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14147a0) on tqpair(0x13db540): expected_datao=0, payload_size=1024 00:14:05.643 [2024-12-15 05:54:27.054249] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054253] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054259] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.643 [2024-12-15 05:54:27.054265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.643 [2024-12-15 05:54:27.054269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054273] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414900) on tqpair=0x13db540 00:14:05.643 [2024-12-15 05:54:27.054291] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.643 [2024-12-15 05:54:27.054299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.643 [2024-12-15 05:54:27.054302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054307] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14147a0) on tqpair=0x13db540 00:14:05.643 [2024-12-15 05:54:27.054324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.643 [2024-12-15 05:54:27.054330] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13db540) 00:14:05.644 [2024-12-15 05:54:27.054341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.644 [2024-12-15 05:54:27.054365] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14147a0, cid 4, qid 0 00:14:05.644 [2024-12-15 05:54:27.054449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.644 [2024-12-15 05:54:27.054461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.644 [2024-12-15 05:54:27.054466] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054470] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13db540): datao=0, datal=3072, cccid=4 00:14:05.644 [2024-12-15 05:54:27.054475] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14147a0) on tqpair(0x13db540): expected_datao=0, payload_size=3072 00:14:05.644 [2024-12-15 
05:54:27.054483] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054487] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.644 [2024-12-15 05:54:27.054502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.644 [2024-12-15 05:54:27.054506] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054510] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14147a0) on tqpair=0x13db540 00:14:05.644 [2024-12-15 05:54:27.054521] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054526] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13db540) 00:14:05.644 [2024-12-15 05:54:27.054537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.644 [2024-12-15 05:54:27.054560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14147a0, cid 4, qid 0 00:14:05.644 [2024-12-15 05:54:27.054641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.644 [2024-12-15 05:54:27.054648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.644 [2024-12-15 05:54:27.054652] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054656] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13db540): datao=0, datal=8, cccid=4 00:14:05.644 [2024-12-15 05:54:27.054661] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14147a0) on tqpair(0x13db540): expected_datao=0, payload_size=8 00:14:05.644 [2024-12-15 05:54:27.054668] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054672] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.644 [2024-12-15 05:54:27.054695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.644 [2024-12-15 05:54:27.054699] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.644 [2024-12-15 05:54:27.054703] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14147a0) on tqpair=0x13db540 00:14:05.644 ===================================================== 00:14:05.644 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:05.644 ===================================================== 00:14:05.644 Controller Capabilities/Features 00:14:05.644 ================================ 00:14:05.644 Vendor ID: 0000 00:14:05.644 Subsystem Vendor ID: 0000 00:14:05.644 Serial Number: .................... 00:14:05.644 Model Number: ........................................ 
00:14:05.644 Firmware Version: 24.01.1 00:14:05.644 Recommended Arb Burst: 0 00:14:05.644 IEEE OUI Identifier: 00 00 00 00:14:05.644 Multi-path I/O 00:14:05.644 May have multiple subsystem ports: No 00:14:05.644 May have multiple controllers: No 00:14:05.644 Associated with SR-IOV VF: No 00:14:05.644 Max Data Transfer Size: 131072 00:14:05.644 Max Number of Namespaces: 0 00:14:05.644 Max Number of I/O Queues: 1024 00:14:05.644 NVMe Specification Version (VS): 1.3 00:14:05.644 NVMe Specification Version (Identify): 1.3 00:14:05.644 Maximum Queue Entries: 128 00:14:05.644 Contiguous Queues Required: Yes 00:14:05.644 Arbitration Mechanisms Supported 00:14:05.644 Weighted Round Robin: Not Supported 00:14:05.644 Vendor Specific: Not Supported 00:14:05.644 Reset Timeout: 15000 ms 00:14:05.644 Doorbell Stride: 4 bytes 00:14:05.644 NVM Subsystem Reset: Not Supported 00:14:05.644 Command Sets Supported 00:14:05.644 NVM Command Set: Supported 00:14:05.644 Boot Partition: Not Supported 00:14:05.644 Memory Page Size Minimum: 4096 bytes 00:14:05.644 Memory Page Size Maximum: 4096 bytes 00:14:05.644 Persistent Memory Region: Not Supported 00:14:05.644 Optional Asynchronous Events Supported 00:14:05.644 Namespace Attribute Notices: Not Supported 00:14:05.644 Firmware Activation Notices: Not Supported 00:14:05.644 ANA Change Notices: Not Supported 00:14:05.644 PLE Aggregate Log Change Notices: Not Supported 00:14:05.644 LBA Status Info Alert Notices: Not Supported 00:14:05.644 EGE Aggregate Log Change Notices: Not Supported 00:14:05.644 Normal NVM Subsystem Shutdown event: Not Supported 00:14:05.644 Zone Descriptor Change Notices: Not Supported 00:14:05.644 Discovery Log Change Notices: Supported 00:14:05.644 Controller Attributes 00:14:05.644 128-bit Host Identifier: Not Supported 00:14:05.644 Non-Operational Permissive Mode: Not Supported 00:14:05.644 NVM Sets: Not Supported 00:14:05.644 Read Recovery Levels: Not Supported 00:14:05.644 Endurance Groups: Not Supported 00:14:05.644 Predictable Latency Mode: Not Supported 00:14:05.644 Traffic Based Keep ALive: Not Supported 00:14:05.644 Namespace Granularity: Not Supported 00:14:05.644 SQ Associations: Not Supported 00:14:05.644 UUID List: Not Supported 00:14:05.644 Multi-Domain Subsystem: Not Supported 00:14:05.644 Fixed Capacity Management: Not Supported 00:14:05.644 Variable Capacity Management: Not Supported 00:14:05.644 Delete Endurance Group: Not Supported 00:14:05.644 Delete NVM Set: Not Supported 00:14:05.644 Extended LBA Formats Supported: Not Supported 00:14:05.644 Flexible Data Placement Supported: Not Supported 00:14:05.644 00:14:05.644 Controller Memory Buffer Support 00:14:05.644 ================================ 00:14:05.644 Supported: No 00:14:05.644 00:14:05.644 Persistent Memory Region Support 00:14:05.644 ================================ 00:14:05.644 Supported: No 00:14:05.644 00:14:05.644 Admin Command Set Attributes 00:14:05.644 ============================ 00:14:05.644 Security Send/Receive: Not Supported 00:14:05.644 Format NVM: Not Supported 00:14:05.644 Firmware Activate/Download: Not Supported 00:14:05.644 Namespace Management: Not Supported 00:14:05.644 Device Self-Test: Not Supported 00:14:05.644 Directives: Not Supported 00:14:05.644 NVMe-MI: Not Supported 00:14:05.644 Virtualization Management: Not Supported 00:14:05.644 Doorbell Buffer Config: Not Supported 00:14:05.644 Get LBA Status Capability: Not Supported 00:14:05.644 Command & Feature Lockdown Capability: Not Supported 00:14:05.644 Abort Command Limit: 1 00:14:05.644 
Async Event Request Limit: 4 00:14:05.644 Number of Firmware Slots: N/A 00:14:05.644 Firmware Slot 1 Read-Only: N/A 00:14:05.644 Firmware Activation Without Reset: N/A 00:14:05.644 Multiple Update Detection Support: N/A 00:14:05.644 Firmware Update Granularity: No Information Provided 00:14:05.644 Per-Namespace SMART Log: No 00:14:05.644 Asymmetric Namespace Access Log Page: Not Supported 00:14:05.644 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:05.644 Command Effects Log Page: Not Supported 00:14:05.644 Get Log Page Extended Data: Supported 00:14:05.644 Telemetry Log Pages: Not Supported 00:14:05.644 Persistent Event Log Pages: Not Supported 00:14:05.644 Supported Log Pages Log Page: May Support 00:14:05.644 Commands Supported & Effects Log Page: Not Supported 00:14:05.644 Feature Identifiers & Effects Log Page:May Support 00:14:05.644 NVMe-MI Commands & Effects Log Page: May Support 00:14:05.644 Data Area 4 for Telemetry Log: Not Supported 00:14:05.644 Error Log Page Entries Supported: 128 00:14:05.644 Keep Alive: Not Supported 00:14:05.644 00:14:05.644 NVM Command Set Attributes 00:14:05.644 ========================== 00:14:05.644 Submission Queue Entry Size 00:14:05.644 Max: 1 00:14:05.644 Min: 1 00:14:05.644 Completion Queue Entry Size 00:14:05.644 Max: 1 00:14:05.644 Min: 1 00:14:05.644 Number of Namespaces: 0 00:14:05.644 Compare Command: Not Supported 00:14:05.644 Write Uncorrectable Command: Not Supported 00:14:05.644 Dataset Management Command: Not Supported 00:14:05.644 Write Zeroes Command: Not Supported 00:14:05.644 Set Features Save Field: Not Supported 00:14:05.644 Reservations: Not Supported 00:14:05.644 Timestamp: Not Supported 00:14:05.644 Copy: Not Supported 00:14:05.644 Volatile Write Cache: Not Present 00:14:05.644 Atomic Write Unit (Normal): 1 00:14:05.644 Atomic Write Unit (PFail): 1 00:14:05.645 Atomic Compare & Write Unit: 1 00:14:05.645 Fused Compare & Write: Supported 00:14:05.645 Scatter-Gather List 00:14:05.645 SGL Command Set: Supported 00:14:05.645 SGL Keyed: Supported 00:14:05.645 SGL Bit Bucket Descriptor: Not Supported 00:14:05.645 SGL Metadata Pointer: Not Supported 00:14:05.645 Oversized SGL: Not Supported 00:14:05.645 SGL Metadata Address: Not Supported 00:14:05.645 SGL Offset: Supported 00:14:05.645 Transport SGL Data Block: Not Supported 00:14:05.645 Replay Protected Memory Block: Not Supported 00:14:05.645 00:14:05.645 Firmware Slot Information 00:14:05.645 ========================= 00:14:05.645 Active slot: 0 00:14:05.645 00:14:05.645 00:14:05.645 Error Log 00:14:05.645 ========= 00:14:05.645 00:14:05.645 Active Namespaces 00:14:05.645 ================= 00:14:05.645 Discovery Log Page 00:14:05.645 ================== 00:14:05.645 Generation Counter: 2 00:14:05.645 Number of Records: 2 00:14:05.645 Record Format: 0 00:14:05.645 00:14:05.645 Discovery Log Entry 0 00:14:05.645 ---------------------- 00:14:05.645 Transport Type: 3 (TCP) 00:14:05.645 Address Family: 1 (IPv4) 00:14:05.645 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:05.645 Entry Flags: 00:14:05.645 Duplicate Returned Information: 1 00:14:05.645 Explicit Persistent Connection Support for Discovery: 1 00:14:05.645 Transport Requirements: 00:14:05.645 Secure Channel: Not Required 00:14:05.645 Port ID: 0 (0x0000) 00:14:05.645 Controller ID: 65535 (0xffff) 00:14:05.645 Admin Max SQ Size: 128 00:14:05.645 Transport Service Identifier: 4420 00:14:05.645 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:05.645 Transport Address: 10.0.0.2 00:14:05.645 
Discovery Log Entry 1 00:14:05.645 ---------------------- 00:14:05.645 Transport Type: 3 (TCP) 00:14:05.645 Address Family: 1 (IPv4) 00:14:05.645 Subsystem Type: 2 (NVM Subsystem) 00:14:05.645 Entry Flags: 00:14:05.645 Duplicate Returned Information: 0 00:14:05.645 Explicit Persistent Connection Support for Discovery: 0 00:14:05.645 Transport Requirements: 00:14:05.645 Secure Channel: Not Required 00:14:05.645 Port ID: 0 (0x0000) 00:14:05.645 Controller ID: 65535 (0xffff) 00:14:05.645 Admin Max SQ Size: 128 00:14:05.645 Transport Service Identifier: 4420 00:14:05.645 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:05.645 Transport Address: 10.0.0.2 [2024-12-15 05:54:27.054821] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:05.645 [2024-12-15 05:54:27.054841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.645 [2024-12-15 05:54:27.054849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.645 [2024-12-15 05:54:27.054855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.645 [2024-12-15 05:54:27.054862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.645 [2024-12-15 05:54:27.054902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.054908] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.054913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.645 [2024-12-15 05:54:27.054921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.645 [2024-12-15 05:54:27.054948] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.645 [2024-12-15 05:54:27.055009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.645 [2024-12-15 05:54:27.055017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.645 [2024-12-15 05:54:27.055021] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055025] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414640) on tqpair=0x13db540 00:14:05.645 [2024-12-15 05:54:27.055035] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055044] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.645 [2024-12-15 05:54:27.055051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.645 [2024-12-15 05:54:27.055073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.645 [2024-12-15 05:54:27.055156] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.645 [2024-12-15 05:54:27.055163] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.645 [2024-12-15 05:54:27.055167] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055171] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414640) on tqpair=0x13db540 00:14:05.645 [2024-12-15 05:54:27.055188] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:05.645 [2024-12-15 05:54:27.055194] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:05.645 [2024-12-15 05:54:27.055205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055210] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055214] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.645 [2024-12-15 05:54:27.055222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.645 [2024-12-15 05:54:27.055240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.645 [2024-12-15 05:54:27.055308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.645 [2024-12-15 05:54:27.055320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.645 [2024-12-15 05:54:27.055325] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414640) on tqpair=0x13db540 00:14:05.645 [2024-12-15 05:54:27.055342] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055348] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.645 [2024-12-15 05:54:27.055360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.645 [2024-12-15 05:54:27.055377] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.645 [2024-12-15 05:54:27.055454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.645 [2024-12-15 05:54:27.055461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.645 [2024-12-15 05:54:27.055465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414640) on tqpair=0x13db540 00:14:05.645 [2024-12-15 05:54:27.055481] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055486] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055490] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.645 [2024-12-15 05:54:27.055497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.645 [2024-12-15 05:54:27.055525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.645 [2024-12-15 05:54:27.055581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.645 [2024-12-15 
05:54:27.055589] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.645 [2024-12-15 05:54:27.055593] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055597] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414640) on tqpair=0x13db540 00:14:05.645 [2024-12-15 05:54:27.055621] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055626] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055630] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.645 [2024-12-15 05:54:27.055638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.645 [2024-12-15 05:54:27.055654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.645 [2024-12-15 05:54:27.055721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.645 [2024-12-15 05:54:27.055728] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.645 [2024-12-15 05:54:27.055732] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055736] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414640) on tqpair=0x13db540 00:14:05.645 [2024-12-15 05:54:27.055748] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055753] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.055757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.645 [2024-12-15 05:54:27.055765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.645 [2024-12-15 05:54:27.055781] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.645 [2024-12-15 05:54:27.055854] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.645 [2024-12-15 05:54:27.055865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.645 [2024-12-15 05:54:27.059888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.059904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414640) on tqpair=0x13db540 00:14:05.645 [2024-12-15 05:54:27.059921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.059927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.645 [2024-12-15 05:54:27.059931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13db540) 00:14:05.646 [2024-12-15 05:54:27.059940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.646 [2024-12-15 05:54:27.059965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414640, cid 3, qid 0 00:14:05.646 [2024-12-15 05:54:27.060029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.646 [2024-12-15 05:54:27.060036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.646 [2024-12-15 05:54:27.060040] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
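The discovery log page printed above advertises two entries at 10.0.0.2:4420 — the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1 — which is what a host uses to decide where to attach. This run goes on to point spdk_nvme_identify directly at cnode1 below; purely as an illustration (not part of this test), a Linux initiator with nvme-cli could consume the same two entries with:

  # Illustration only; assumes nvme-cli on a host that can reach the 10.0.0.2 target above.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1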
00:14:05.646 [2024-12-15 05:54:27.060044] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1414640) on tqpair=0x13db540 00:14:05.646 [2024-12-15 05:54:27.060054] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:05.646 00:14:05.646 05:54:27 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:05.646 [2024-12-15 05:54:27.094480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:05.646 [2024-12-15 05:54:27.094533] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80041 ] 00:14:05.646 [2024-12-15 05:54:27.233479] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:05.646 [2024-12-15 05:54:27.233548] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:05.646 [2024-12-15 05:54:27.233555] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:05.646 [2024-12-15 05:54:27.233566] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:05.646 [2024-12-15 05:54:27.233577] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:05.646 [2024-12-15 05:54:27.233716] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:05.646 [2024-12-15 05:54:27.233769] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9fd540 0 00:14:05.646 [2024-12-15 05:54:27.236972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:05.646 [2024-12-15 05:54:27.237025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:05.646 [2024-12-15 05:54:27.237031] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:05.646 [2024-12-15 05:54:27.237051] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:05.646 [2024-12-15 05:54:27.237091] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.237099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.237103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.646 [2024-12-15 05:54:27.237116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:05.646 [2024-12-15 05:54:27.237144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.646 [2024-12-15 05:54:27.244014] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.646 [2024-12-15 05:54:27.244034] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.646 [2024-12-15 05:54:27.244039] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244060] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.646 [2024-12-15 05:54:27.244072] nvme_fabric.c: 
620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:05.646 [2024-12-15 05:54:27.244079] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:05.646 [2024-12-15 05:54:27.244085] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:05.646 [2024-12-15 05:54:27.244099] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.646 [2024-12-15 05:54:27.244116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.646 [2024-12-15 05:54:27.244141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.646 [2024-12-15 05:54:27.244196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.646 [2024-12-15 05:54:27.244203] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.646 [2024-12-15 05:54:27.244207] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244211] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.646 [2024-12-15 05:54:27.244216] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:05.646 [2024-12-15 05:54:27.244224] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:05.646 [2024-12-15 05:54:27.244231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.646 [2024-12-15 05:54:27.244247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.646 [2024-12-15 05:54:27.244263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.646 [2024-12-15 05:54:27.244342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.646 [2024-12-15 05:54:27.244348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.646 [2024-12-15 05:54:27.244352] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244356] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.646 [2024-12-15 05:54:27.244362] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:05.646 [2024-12-15 05:54:27.244371] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:05.646 [2024-12-15 05:54:27.244379] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244383] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244387] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.646 [2024-12-15 05:54:27.244394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.646 [2024-12-15 05:54:27.244412] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.646 [2024-12-15 05:54:27.244460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.646 [2024-12-15 05:54:27.244467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.646 [2024-12-15 05:54:27.244470] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244475] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.646 [2024-12-15 05:54:27.244481] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:05.646 [2024-12-15 05:54:27.244491] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244495] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244499] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.646 [2024-12-15 05:54:27.244507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.646 [2024-12-15 05:54:27.244523] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.646 [2024-12-15 05:54:27.244568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.646 [2024-12-15 05:54:27.244580] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.646 [2024-12-15 05:54:27.244584] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.646 [2024-12-15 05:54:27.244588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.646 [2024-12-15 05:54:27.244594] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:05.647 [2024-12-15 05:54:27.244616] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:05.647 [2024-12-15 05:54:27.244625] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:05.647 [2024-12-15 05:54:27.244731] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:05.647 [2024-12-15 05:54:27.244735] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:05.647 [2024-12-15 05:54:27.244745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.244749] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.244753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.244761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:05.647 [2024-12-15 05:54:27.244780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.647 [2024-12-15 05:54:27.244839] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.647 [2024-12-15 05:54:27.244865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.647 [2024-12-15 05:54:27.244869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.244885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.647 [2024-12-15 05:54:27.244892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:05.647 [2024-12-15 05:54:27.244904] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.244909] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.244913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.244921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.647 [2024-12-15 05:54:27.244941] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.647 [2024-12-15 05:54:27.244998] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.647 [2024-12-15 05:54:27.245005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.647 [2024-12-15 05:54:27.245009] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245013] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.647 [2024-12-15 05:54:27.245019] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:05.647 [2024-12-15 05:54:27.245024] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:05.647 [2024-12-15 05:54:27.245033] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:05.647 [2024-12-15 05:54:27.245049] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:05.647 [2024-12-15 05:54:27.245059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.245076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.647 [2024-12-15 05:54:27.245094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.647 [2024-12-15 05:54:27.245192] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.647 [2024-12-15 05:54:27.245208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.647 
[2024-12-15 05:54:27.245213] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245217] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9fd540): datao=0, datal=4096, cccid=0 00:14:05.647 [2024-12-15 05:54:27.245223] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa36220) on tqpair(0x9fd540): expected_datao=0, payload_size=4096 00:14:05.647 [2024-12-15 05:54:27.245232] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245237] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.647 [2024-12-15 05:54:27.245253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.647 [2024-12-15 05:54:27.245257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.647 [2024-12-15 05:54:27.245284] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:05.647 [2024-12-15 05:54:27.245290] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:05.647 [2024-12-15 05:54:27.245295] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:05.647 [2024-12-15 05:54:27.245300] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:05.647 [2024-12-15 05:54:27.245305] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:05.647 [2024-12-15 05:54:27.245310] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:05.647 [2024-12-15 05:54:27.245324] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:05.647 [2024-12-15 05:54:27.245333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245341] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.245350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.647 [2024-12-15 05:54:27.245370] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.647 [2024-12-15 05:54:27.245425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.647 [2024-12-15 05:54:27.245432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.647 [2024-12-15 05:54:27.245436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36220) on tqpair=0x9fd540 00:14:05.647 [2024-12-15 05:54:27.245448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 
[2024-12-15 05:54:27.245456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.245463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.647 [2024-12-15 05:54:27.245470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.245483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.647 [2024-12-15 05:54:27.245490] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245497] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.245503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.647 [2024-12-15 05:54:27.245509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245513] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245517] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.245523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.647 [2024-12-15 05:54:27.245529] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:05.647 [2024-12-15 05:54:27.245542] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:05.647 [2024-12-15 05:54:27.245549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9fd540) 00:14:05.647 [2024-12-15 05:54:27.245564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.647 [2024-12-15 05:54:27.245584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36220, cid 0, qid 0 00:14:05.647 [2024-12-15 05:54:27.245591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36380, cid 1, qid 0 00:14:05.647 [2024-12-15 05:54:27.245596] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa364e0, cid 2, qid 0 00:14:05.647 [2024-12-15 05:54:27.245617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.647 [2024-12-15 05:54:27.245622] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa367a0, cid 4, qid 0 00:14:05.647 [2024-12-15 05:54:27.245724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
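The debug stream above records the SPDK NVMe host driver bringing the admin queue of the NVMe-oF/TCP controller online: fabric Property Set/Get toggling CC.EN and waiting for CSTS.RDY, IDENTIFY controller, AER configuration and keep-alive setup. As a point of reference only, a minimal host program using SPDK's public API would trigger this same sequence through spdk_nvme_connect(); the sketch below is illustrative, assumes an SPDK development environment, and is not the program this test actually runs (the application name is hypothetical).

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "nvmf_tcp_connect_sketch";      /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        fprintf(stderr, "spdk_env_init() failed\n");
        return 1;
    }

    /* Transport ID for the target printed in this log:
     * NVMe/TCP at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }

    /* spdk_nvme_connect() drives the admin-queue sequence the log records:
     * fabric Property Set/Get to enable the controller and wait for
     * CSTS.RDY, IDENTIFY, AER configuration, keep-alive and namespace
     * identification, before returning a ready controller handle. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "spdk_nvme_connect() failed\n");
        return 1;
    }

    printf("connected; controller reports %u namespace(s)\n",
           spdk_nvme_ctrlr_get_num_ns(ctrlr));

    /* Detaching triggers the shutdown/destruct sequence seen later in the log. */
    spdk_nvme_detach(ctrlr);
    return 0;
}
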
00:14:05.647 [2024-12-15 05:54:27.245736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.647 [2024-12-15 05:54:27.245740] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.647 [2024-12-15 05:54:27.245745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa367a0) on tqpair=0x9fd540 00:14:05.647 [2024-12-15 05:54:27.245751] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:05.647 [2024-12-15 05:54:27.245757] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:05.647 [2024-12-15 05:54:27.245766] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:05.647 [2024-12-15 05:54:27.245777] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.245785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.245790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.245794] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9fd540) 00:14:05.648 [2024-12-15 05:54:27.245802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.648 [2024-12-15 05:54:27.245821] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa367a0, cid 4, qid 0 00:14:05.648 [2024-12-15 05:54:27.245886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.648 [2024-12-15 05:54:27.245894] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.648 [2024-12-15 05:54:27.245898] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.245903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa367a0) on tqpair=0x9fd540 00:14:05.648 [2024-12-15 05:54:27.245967] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.245993] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246002] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246010] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9fd540) 00:14:05.648 [2024-12-15 05:54:27.246017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.648 [2024-12-15 05:54:27.246037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa367a0, cid 4, qid 0 00:14:05.648 [2024-12-15 05:54:27.246103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.648 [2024-12-15 05:54:27.246110] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.648 [2024-12-15 05:54:27.246114] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246118] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9fd540): datao=0, datal=4096, cccid=4 00:14:05.648 [2024-12-15 05:54:27.246123] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa367a0) on tqpair(0x9fd540): expected_datao=0, payload_size=4096 00:14:05.648 [2024-12-15 05:54:27.246131] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246141] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246150] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.648 [2024-12-15 05:54:27.246156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.648 [2024-12-15 05:54:27.246159] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246163] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa367a0) on tqpair=0x9fd540 00:14:05.648 [2024-12-15 05:54:27.246179] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:05.648 [2024-12-15 05:54:27.246189] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246200] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9fd540) 00:14:05.648 [2024-12-15 05:54:27.246224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.648 [2024-12-15 05:54:27.246244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa367a0, cid 4, qid 0 00:14:05.648 [2024-12-15 05:54:27.246318] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.648 [2024-12-15 05:54:27.246325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.648 [2024-12-15 05:54:27.246329] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246332] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9fd540): datao=0, datal=4096, cccid=4 00:14:05.648 [2024-12-15 05:54:27.246337] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa367a0) on tqpair(0x9fd540): expected_datao=0, payload_size=4096 00:14:05.648 [2024-12-15 05:54:27.246345] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246349] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.648 [2024-12-15 05:54:27.246364] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.648 [2024-12-15 05:54:27.246368] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246372] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa367a0) on 
tqpair=0x9fd540 00:14:05.648 [2024-12-15 05:54:27.246388] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246399] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246407] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9fd540) 00:14:05.648 [2024-12-15 05:54:27.246423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.648 [2024-12-15 05:54:27.246442] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa367a0, cid 4, qid 0 00:14:05.648 [2024-12-15 05:54:27.246499] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.648 [2024-12-15 05:54:27.246506] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.648 [2024-12-15 05:54:27.246510] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246514] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9fd540): datao=0, datal=4096, cccid=4 00:14:05.648 [2024-12-15 05:54:27.246519] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa367a0) on tqpair(0x9fd540): expected_datao=0, payload_size=4096 00:14:05.648 [2024-12-15 05:54:27.246527] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246531] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246539] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.648 [2024-12-15 05:54:27.246545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.648 [2024-12-15 05:54:27.246549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246553] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa367a0) on tqpair=0x9fd540 00:14:05.648 [2024-12-15 05:54:27.246561] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246570] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246581] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246587] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246593] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246615] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:05.648 [2024-12-15 05:54:27.246620] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:05.648 [2024-12-15 05:54:27.246626] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:05.648 [2024-12-15 05:54:27.246642] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246647] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246651] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9fd540) 00:14:05.648 [2024-12-15 05:54:27.246659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.648 [2024-12-15 05:54:27.246667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246675] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9fd540) 00:14:05.648 [2024-12-15 05:54:27.246681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.648 [2024-12-15 05:54:27.246705] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa367a0, cid 4, qid 0 00:14:05.648 [2024-12-15 05:54:27.246713] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36900, cid 5, qid 0 00:14:05.648 [2024-12-15 05:54:27.246786] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.648 [2024-12-15 05:54:27.246793] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.648 [2024-12-15 05:54:27.246797] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246801] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa367a0) on tqpair=0x9fd540 00:14:05.648 [2024-12-15 05:54:27.246809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.648 [2024-12-15 05:54:27.246815] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.648 [2024-12-15 05:54:27.246819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36900) on tqpair=0x9fd540 00:14:05.648 [2024-12-15 05:54:27.246834] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246842] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9fd540) 00:14:05.648 [2024-12-15 05:54:27.246850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.648 [2024-12-15 05:54:27.246867] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36900, cid 5, qid 0 00:14:05.648 [2024-12-15 05:54:27.246931] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.648 [2024-12-15 05:54:27.246940] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.648 [2024-12-15 05:54:27.246944] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.648 [2024-12-15 
05:54:27.246948] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36900) on tqpair=0x9fd540 00:14:05.648 [2024-12-15 05:54:27.246960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246964] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.648 [2024-12-15 05:54:27.246968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9fd540) 00:14:05.649 [2024-12-15 05:54:27.246991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.649 [2024-12-15 05:54:27.247009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36900, cid 5, qid 0 00:14:05.649 [2024-12-15 05:54:27.247063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.649 [2024-12-15 05:54:27.247070] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.649 [2024-12-15 05:54:27.247074] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247078] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36900) on tqpair=0x9fd540 00:14:05.649 [2024-12-15 05:54:27.247089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9fd540) 00:14:05.649 [2024-12-15 05:54:27.247105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.649 [2024-12-15 05:54:27.247120] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36900, cid 5, qid 0 00:14:05.649 [2024-12-15 05:54:27.247183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.649 [2024-12-15 05:54:27.247207] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.649 [2024-12-15 05:54:27.247211] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247215] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36900) on tqpair=0x9fd540 00:14:05.649 [2024-12-15 05:54:27.247229] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9fd540) 00:14:05.649 [2024-12-15 05:54:27.247246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.649 [2024-12-15 05:54:27.247254] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247259] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247263] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9fd540) 00:14:05.649 [2024-12-15 05:54:27.247270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.649 [2024-12-15 
05:54:27.247277] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247286] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9fd540) 00:14:05.649 [2024-12-15 05:54:27.247293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.649 [2024-12-15 05:54:27.247300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247309] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9fd540) 00:14:05.649 [2024-12-15 05:54:27.247316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.649 [2024-12-15 05:54:27.247335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36900, cid 5, qid 0 00:14:05.649 [2024-12-15 05:54:27.247343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa367a0, cid 4, qid 0 00:14:05.649 [2024-12-15 05:54:27.247348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36a60, cid 6, qid 0 00:14:05.649 [2024-12-15 05:54:27.247353] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36bc0, cid 7, qid 0 00:14:05.649 [2024-12-15 05:54:27.247492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.649 [2024-12-15 05:54:27.247499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.649 [2024-12-15 05:54:27.247503] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247522] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9fd540): datao=0, datal=8192, cccid=5 00:14:05.649 [2024-12-15 05:54:27.247527] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa36900) on tqpair(0x9fd540): expected_datao=0, payload_size=8192 00:14:05.649 [2024-12-15 05:54:27.247544] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247549] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247555] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.649 [2024-12-15 05:54:27.247561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.649 [2024-12-15 05:54:27.247565] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247568] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9fd540): datao=0, datal=512, cccid=4 00:14:05.649 [2024-12-15 05:54:27.247573] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa367a0) on tqpair(0x9fd540): expected_datao=0, payload_size=512 00:14:05.649 [2024-12-15 05:54:27.247581] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247584] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.649 [2024-12-15 05:54:27.247608] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:14:05.649 [2024-12-15 05:54:27.247612] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247616] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9fd540): datao=0, datal=512, cccid=6 00:14:05.649 [2024-12-15 05:54:27.247620] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa36a60) on tqpair(0x9fd540): expected_datao=0, payload_size=512 00:14:05.649 [2024-12-15 05:54:27.247628] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247632] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247638] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.649 [2024-12-15 05:54:27.247644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.649 [2024-12-15 05:54:27.247648] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247652] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9fd540): datao=0, datal=4096, cccid=7 00:14:05.649 [2024-12-15 05:54:27.247656] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa36bc0) on tqpair(0x9fd540): expected_datao=0, payload_size=4096 00:14:05.649 [2024-12-15 05:54:27.247664] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247668] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.649 [2024-12-15 05:54:27.247683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.649 [2024-12-15 05:54:27.247687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36900) on tqpair=0x9fd540 00:14:05.649 [2024-12-15 05:54:27.247708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.649 [2024-12-15 05:54:27.247715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.649 [2024-12-15 05:54:27.247719] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa367a0) on tqpair=0x9fd540 00:14:05.649 [2024-12-15 05:54:27.247733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.649 [2024-12-15 05:54:27.247740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.649 [2024-12-15 05:54:27.247744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247748] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36a60) on tqpair=0x9fd540 00:14:05.649 [2024-12-15 05:54:27.247755] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.649 [2024-12-15 05:54:27.247762] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.649 [2024-12-15 05:54:27.247766] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.649 [2024-12-15 05:54:27.247770] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36bc0) on tqpair=0x9fd540 00:14:05.649 ===================================================== 00:14:05.649 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.649 
===================================================== 00:14:05.649 Controller Capabilities/Features 00:14:05.649 ================================ 00:14:05.649 Vendor ID: 8086 00:14:05.649 Subsystem Vendor ID: 8086 00:14:05.649 Serial Number: SPDK00000000000001 00:14:05.649 Model Number: SPDK bdev Controller 00:14:05.649 Firmware Version: 24.01.1 00:14:05.649 Recommended Arb Burst: 6 00:14:05.649 IEEE OUI Identifier: e4 d2 5c 00:14:05.649 Multi-path I/O 00:14:05.649 May have multiple subsystem ports: Yes 00:14:05.649 May have multiple controllers: Yes 00:14:05.649 Associated with SR-IOV VF: No 00:14:05.649 Max Data Transfer Size: 131072 00:14:05.649 Max Number of Namespaces: 32 00:14:05.649 Max Number of I/O Queues: 127 00:14:05.649 NVMe Specification Version (VS): 1.3 00:14:05.649 NVMe Specification Version (Identify): 1.3 00:14:05.649 Maximum Queue Entries: 128 00:14:05.649 Contiguous Queues Required: Yes 00:14:05.649 Arbitration Mechanisms Supported 00:14:05.649 Weighted Round Robin: Not Supported 00:14:05.649 Vendor Specific: Not Supported 00:14:05.649 Reset Timeout: 15000 ms 00:14:05.649 Doorbell Stride: 4 bytes 00:14:05.649 NVM Subsystem Reset: Not Supported 00:14:05.649 Command Sets Supported 00:14:05.649 NVM Command Set: Supported 00:14:05.649 Boot Partition: Not Supported 00:14:05.649 Memory Page Size Minimum: 4096 bytes 00:14:05.649 Memory Page Size Maximum: 4096 bytes 00:14:05.649 Persistent Memory Region: Not Supported 00:14:05.649 Optional Asynchronous Events Supported 00:14:05.649 Namespace Attribute Notices: Supported 00:14:05.649 Firmware Activation Notices: Not Supported 00:14:05.649 ANA Change Notices: Not Supported 00:14:05.649 PLE Aggregate Log Change Notices: Not Supported 00:14:05.649 LBA Status Info Alert Notices: Not Supported 00:14:05.649 EGE Aggregate Log Change Notices: Not Supported 00:14:05.649 Normal NVM Subsystem Shutdown event: Not Supported 00:14:05.649 Zone Descriptor Change Notices: Not Supported 00:14:05.649 Discovery Log Change Notices: Not Supported 00:14:05.649 Controller Attributes 00:14:05.650 128-bit Host Identifier: Supported 00:14:05.650 Non-Operational Permissive Mode: Not Supported 00:14:05.650 NVM Sets: Not Supported 00:14:05.650 Read Recovery Levels: Not Supported 00:14:05.650 Endurance Groups: Not Supported 00:14:05.650 Predictable Latency Mode: Not Supported 00:14:05.650 Traffic Based Keep ALive: Not Supported 00:14:05.650 Namespace Granularity: Not Supported 00:14:05.650 SQ Associations: Not Supported 00:14:05.650 UUID List: Not Supported 00:14:05.650 Multi-Domain Subsystem: Not Supported 00:14:05.650 Fixed Capacity Management: Not Supported 00:14:05.650 Variable Capacity Management: Not Supported 00:14:05.650 Delete Endurance Group: Not Supported 00:14:05.650 Delete NVM Set: Not Supported 00:14:05.650 Extended LBA Formats Supported: Not Supported 00:14:05.650 Flexible Data Placement Supported: Not Supported 00:14:05.650 00:14:05.650 Controller Memory Buffer Support 00:14:05.650 ================================ 00:14:05.650 Supported: No 00:14:05.650 00:14:05.650 Persistent Memory Region Support 00:14:05.650 ================================ 00:14:05.650 Supported: No 00:14:05.650 00:14:05.650 Admin Command Set Attributes 00:14:05.650 ============================ 00:14:05.650 Security Send/Receive: Not Supported 00:14:05.650 Format NVM: Not Supported 00:14:05.650 Firmware Activate/Download: Not Supported 00:14:05.650 Namespace Management: Not Supported 00:14:05.650 Device Self-Test: Not Supported 00:14:05.650 Directives: Not Supported 
00:14:05.650 NVMe-MI: Not Supported 00:14:05.650 Virtualization Management: Not Supported 00:14:05.650 Doorbell Buffer Config: Not Supported 00:14:05.650 Get LBA Status Capability: Not Supported 00:14:05.650 Command & Feature Lockdown Capability: Not Supported 00:14:05.650 Abort Command Limit: 4 00:14:05.650 Async Event Request Limit: 4 00:14:05.650 Number of Firmware Slots: N/A 00:14:05.650 Firmware Slot 1 Read-Only: N/A 00:14:05.650 Firmware Activation Without Reset: N/A 00:14:05.650 Multiple Update Detection Support: N/A 00:14:05.650 Firmware Update Granularity: No Information Provided 00:14:05.650 Per-Namespace SMART Log: No 00:14:05.650 Asymmetric Namespace Access Log Page: Not Supported 00:14:05.650 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:05.650 Command Effects Log Page: Supported 00:14:05.650 Get Log Page Extended Data: Supported 00:14:05.650 Telemetry Log Pages: Not Supported 00:14:05.650 Persistent Event Log Pages: Not Supported 00:14:05.650 Supported Log Pages Log Page: May Support 00:14:05.650 Commands Supported & Effects Log Page: Not Supported 00:14:05.650 Feature Identifiers & Effects Log Page:May Support 00:14:05.650 NVMe-MI Commands & Effects Log Page: May Support 00:14:05.650 Data Area 4 for Telemetry Log: Not Supported 00:14:05.650 Error Log Page Entries Supported: 128 00:14:05.650 Keep Alive: Supported 00:14:05.650 Keep Alive Granularity: 10000 ms 00:14:05.650 00:14:05.650 NVM Command Set Attributes 00:14:05.650 ========================== 00:14:05.650 Submission Queue Entry Size 00:14:05.650 Max: 64 00:14:05.650 Min: 64 00:14:05.650 Completion Queue Entry Size 00:14:05.650 Max: 16 00:14:05.650 Min: 16 00:14:05.650 Number of Namespaces: 32 00:14:05.650 Compare Command: Supported 00:14:05.650 Write Uncorrectable Command: Not Supported 00:14:05.650 Dataset Management Command: Supported 00:14:05.650 Write Zeroes Command: Supported 00:14:05.650 Set Features Save Field: Not Supported 00:14:05.650 Reservations: Supported 00:14:05.650 Timestamp: Not Supported 00:14:05.650 Copy: Supported 00:14:05.650 Volatile Write Cache: Present 00:14:05.650 Atomic Write Unit (Normal): 1 00:14:05.650 Atomic Write Unit (PFail): 1 00:14:05.650 Atomic Compare & Write Unit: 1 00:14:05.650 Fused Compare & Write: Supported 00:14:05.650 Scatter-Gather List 00:14:05.650 SGL Command Set: Supported 00:14:05.650 SGL Keyed: Supported 00:14:05.650 SGL Bit Bucket Descriptor: Not Supported 00:14:05.650 SGL Metadata Pointer: Not Supported 00:14:05.650 Oversized SGL: Not Supported 00:14:05.650 SGL Metadata Address: Not Supported 00:14:05.650 SGL Offset: Supported 00:14:05.650 Transport SGL Data Block: Not Supported 00:14:05.650 Replay Protected Memory Block: Not Supported 00:14:05.650 00:14:05.650 Firmware Slot Information 00:14:05.650 ========================= 00:14:05.650 Active slot: 1 00:14:05.650 Slot 1 Firmware Revision: 24.01.1 00:14:05.650 00:14:05.650 00:14:05.650 Commands Supported and Effects 00:14:05.650 ============================== 00:14:05.650 Admin Commands 00:14:05.650 -------------- 00:14:05.650 Get Log Page (02h): Supported 00:14:05.650 Identify (06h): Supported 00:14:05.650 Abort (08h): Supported 00:14:05.650 Set Features (09h): Supported 00:14:05.650 Get Features (0Ah): Supported 00:14:05.650 Asynchronous Event Request (0Ch): Supported 00:14:05.650 Keep Alive (18h): Supported 00:14:05.650 I/O Commands 00:14:05.650 ------------ 00:14:05.650 Flush (00h): Supported LBA-Change 00:14:05.650 Write (01h): Supported LBA-Change 00:14:05.650 Read (02h): Supported 00:14:05.650 Compare (05h): 
Supported 00:14:05.650 Write Zeroes (08h): Supported LBA-Change 00:14:05.650 Dataset Management (09h): Supported LBA-Change 00:14:05.650 Copy (19h): Supported LBA-Change 00:14:05.650 Unknown (79h): Supported LBA-Change 00:14:05.650 Unknown (7Ah): Supported 00:14:05.650 00:14:05.650 Error Log 00:14:05.650 ========= 00:14:05.650 00:14:05.650 Arbitration 00:14:05.650 =========== 00:14:05.650 Arbitration Burst: 1 00:14:05.650 00:14:05.650 Power Management 00:14:05.650 ================ 00:14:05.650 Number of Power States: 1 00:14:05.650 Current Power State: Power State #0 00:14:05.650 Power State #0: 00:14:05.650 Max Power: 0.00 W 00:14:05.650 Non-Operational State: Operational 00:14:05.650 Entry Latency: Not Reported 00:14:05.650 Exit Latency: Not Reported 00:14:05.650 Relative Read Throughput: 0 00:14:05.650 Relative Read Latency: 0 00:14:05.650 Relative Write Throughput: 0 00:14:05.650 Relative Write Latency: 0 00:14:05.650 Idle Power: Not Reported 00:14:05.650 Active Power: Not Reported 00:14:05.650 Non-Operational Permissive Mode: Not Supported 00:14:05.650 00:14:05.650 Health Information 00:14:05.650 ================== 00:14:05.650 Critical Warnings: 00:14:05.650 Available Spare Space: OK 00:14:05.650 Temperature: OK 00:14:05.650 Device Reliability: OK 00:14:05.650 Read Only: No 00:14:05.650 Volatile Memory Backup: OK 00:14:05.650 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:05.650 Temperature Threshold: [2024-12-15 05:54:27.247887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.650 [2024-12-15 05:54:27.247897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.650 [2024-12-15 05:54:27.247901] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9fd540) 00:14:05.650 [2024-12-15 05:54:27.247909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.650 [2024-12-15 05:54:27.252014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36bc0, cid 7, qid 0 00:14:05.650 [2024-12-15 05:54:27.252065] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.650 [2024-12-15 05:54:27.252073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.650 [2024-12-15 05:54:27.252077] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.650 [2024-12-15 05:54:27.252081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36bc0) on tqpair=0x9fd540 00:14:05.650 [2024-12-15 05:54:27.252117] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:05.650 [2024-12-15 05:54:27.252131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.650 [2024-12-15 05:54:27.252155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.650 [2024-12-15 05:54:27.252161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.650 [2024-12-15 05:54:27.252167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.650 [2024-12-15 05:54:27.252177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.650 [2024-12-15 05:54:27.252182] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.650 [2024-12-15 05:54:27.252185] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.650 [2024-12-15 05:54:27.252194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.650 [2024-12-15 05:54:27.252216] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.650 [2024-12-15 05:54:27.252268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.650 [2024-12-15 05:54:27.252276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.650 [2024-12-15 05:54:27.252280] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.650 [2024-12-15 05:54:27.252284] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.650 [2024-12-15 05:54:27.252291] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.650 [2024-12-15 05:54:27.252296] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.650 [2024-12-15 05:54:27.252300] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.650 [2024-12-15 05:54:27.252307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.252327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.252393] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.252400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.252403] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.252413] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:05.651 [2024-12-15 05:54:27.252418] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:05.651 [2024-12-15 05:54:27.252428] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252436] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.252444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.252460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.252511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.252523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.252528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252532] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.252544] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252549] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252553] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.252560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.252577] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.252647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.252654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.252658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252663] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.252673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252678] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252682] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.252690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.252706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.252758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.252777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.252782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.252798] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252803] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.252815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.252833] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.252887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.252895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.252899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.252915] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252920] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.252924] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.252932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.252951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.253011] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.253018] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.253021] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.253036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.253053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.253069] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.253121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.253128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.253132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.253147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253155] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.253163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.253179] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.253228] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.253235] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.253239] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.253254] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253258] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253263] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.253271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.253287] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.253336] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.253343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.253347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.651 [2024-12-15 05:54:27.253362] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253366] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.651 [2024-12-15 05:54:27.253378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.651 [2024-12-15 05:54:27.253394] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.651 [2024-12-15 05:54:27.253443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.651 [2024-12-15 05:54:27.253450] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.651 [2024-12-15 05:54:27.253454] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.651 [2024-12-15 05:54:27.253458] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.253469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.253485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.253501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.253551] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.253558] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.253562] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.253577] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253582] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.253593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.253610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 
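The entries in this stretch record the controller being destructed: outstanding admin commands are aborted with SQ DELETION, RTD3E is reported as 0 us, the driver falls back to its 10000 ms shutdown timeout, and the repeated FABRIC PROPERTY GET commands appear to be polling CSTS until shutdown completes. A minimal sketch of that register-level handshake is below; the offsets and field encodings come from the NVMe specification, while prop_get32()/prop_set32() and the simulated property space are hypothetical stand-ins for the fabrics Property Get/Set commands the log shows.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC        0x14u        /* Controller Configuration (NVMe spec offset) */
#define NVME_REG_CSTS      0x1Cu        /* Controller Status */
#define CC_SHN_NORMAL      (1u << 14)   /* CC.SHN = 01b: normal shutdown notification */
#define CSTS_SHST_MASK     (3u << 2)    /* CSTS.SHST: shutdown status field */
#define CSTS_SHST_COMPLETE (2u << 2)    /* 10b: shutdown processing complete */

/* Tiny in-memory stand-in for the controller's property space so the sketch
 * runs without a target; a real host issues the fabrics Property Get/Set
 * commands that appear in the log instead. */
static uint32_t fake_cc, fake_csts;

static uint32_t prop_get32(uint32_t off)
{
    /* Simulated controller: report shutdown complete once CC.SHN is set. */
    if (off == NVME_REG_CSTS && (fake_cc & CC_SHN_NORMAL)) {
        fake_csts = (fake_csts & ~CSTS_SHST_MASK) | CSTS_SHST_COMPLETE;
    }
    return off == NVME_REG_CC ? fake_cc : fake_csts;
}

static void prop_set32(uint32_t off, uint32_t val)
{
    if (off == NVME_REG_CC) {
        fake_cc = val;
    }
}

/* Request a normal shutdown, then poll CSTS.SHST until it reports completion
 * or the timeout expires; each poll corresponds to one Property Get. */
static bool shutdown_controller(unsigned max_polls)
{
    prop_set32(NVME_REG_CC, prop_get32(NVME_REG_CC) | CC_SHN_NORMAL);

    for (unsigned i = 0; i < max_polls; i++) {
        if ((prop_get32(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    printf("shutdown %s\n", shutdown_controller(10000) ? "complete" : "timed out");
    return 0;
}
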
00:14:05.652 [2024-12-15 05:54:27.253666] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.253677] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.253681] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253686] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.253697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253702] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253706] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.253713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.253730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.253779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.253786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.253790] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253794] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.253805] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253813] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.253821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.253837] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.253896] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.253908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.253912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253916] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.253928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253933] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.253937] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.253945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.253963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.254015] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.254022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:14:05.652 [2024-12-15 05:54:27.254026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.254041] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254049] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.254057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.254074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.254119] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.254130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.254150] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254154] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.254164] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254173] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.254181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.254197] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.254244] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.254251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.254255] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.254269] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254274] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254277] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.254285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.254301] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.254347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.254354] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.254358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.254372] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254377] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.254388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.254404] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.254454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.254464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.254468] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254473] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.254483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254492] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.254500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.254516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.254564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.254570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.254574] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254578] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.254588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.254620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.254637] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.254683] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.254694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.254698] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254702] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.254713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254718] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.652 [2024-12-15 05:54:27.254730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.652 [2024-12-15 05:54:27.254747] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.652 [2024-12-15 05:54:27.254796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.652 [2024-12-15 05:54:27.254807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.652 [2024-12-15 05:54:27.254812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.652 [2024-12-15 05:54:27.254816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.652 [2024-12-15 05:54:27.254827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.254832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.254836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.254844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.254861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.254917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.254927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.254931] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.254935] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.254946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.254951] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.254955] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.254963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.254982] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.255066] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.255073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.255077] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.255092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255101] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 
00:14:05.653 [2024-12-15 05:54:27.255109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.255125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.255172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.255189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.255193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.255209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.255226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.255244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.255295] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.255302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.255306] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255310] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.255321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255325] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255329] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.255337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.255353] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.255402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.255409] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.255413] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.255428] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255436] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.255444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 
05:54:27.255460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.255509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.255516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.255520] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.255535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255540] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.255551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.255567] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.255619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.255626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.255630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.255644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.255661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.255677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.255721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.255728] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.255731] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255735] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.255746] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255751] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.255762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.255779] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.255830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:14:05.653 [2024-12-15 05:54:27.255837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.255841] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255845] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.255856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.255864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9fd540) 00:14:05.653 [2024-12-15 05:54:27.258940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.653 [2024-12-15 05:54:27.258976] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36640, cid 3, qid 0 00:14:05.653 [2024-12-15 05:54:27.259025] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.653 [2024-12-15 05:54:27.259033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.653 [2024-12-15 05:54:27.259037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.653 [2024-12-15 05:54:27.259042] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36640) on tqpair=0x9fd540 00:14:05.653 [2024-12-15 05:54:27.259051] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:14:05.653 0 Kelvin (-273 Celsius) 00:14:05.653 Available Spare: 0% 00:14:05.653 Available Spare Threshold: 0% 00:14:05.653 Life Percentage Used: 0% 00:14:05.653 Data Units Read: 0 00:14:05.653 Data Units Written: 0 00:14:05.653 Host Read Commands: 0 00:14:05.653 Host Write Commands: 0 00:14:05.653 Controller Busy Time: 0 minutes 00:14:05.653 Power Cycles: 0 00:14:05.653 Power On Hours: 0 hours 00:14:05.653 Unsafe Shutdowns: 0 00:14:05.653 Unrecoverable Media Errors: 0 00:14:05.653 Lifetime Error Log Entries: 0 00:14:05.653 Warning Temperature Time: 0 minutes 00:14:05.653 Critical Temperature Time: 0 minutes 00:14:05.653 00:14:05.653 Number of Queues 00:14:05.653 ================ 00:14:05.653 Number of I/O Submission Queues: 127 00:14:05.653 Number of I/O Completion Queues: 127 00:14:05.653 00:14:05.653 Active Namespaces 00:14:05.653 ================= 00:14:05.653 Namespace ID:1 00:14:05.653 Error Recovery Timeout: Unlimited 00:14:05.653 Command Set Identifier: NVM (00h) 00:14:05.653 Deallocate: Supported 00:14:05.653 Deallocated/Unwritten Error: Not Supported 00:14:05.653 Deallocated Read Value: Unknown 00:14:05.653 Deallocate in Write Zeroes: Not Supported 00:14:05.653 Deallocated Guard Field: 0xFFFF 00:14:05.653 Flush: Supported 00:14:05.653 Reservation: Supported 00:14:05.653 Namespace Sharing Capabilities: Multiple Controllers 00:14:05.653 Size (in LBAs): 131072 (0GiB) 00:14:05.653 Capacity (in LBAs): 131072 (0GiB) 00:14:05.653 Utilization (in LBAs): 131072 (0GiB) 00:14:05.654 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:05.654 EUI64: ABCDEF0123456789 00:14:05.654 UUID: fde902f7-fa2d-4e78-816f-983d9fad6aa8 00:14:05.654 Thin Provisioning: Not Supported 00:14:05.654 Per-NS Atomic Units: Yes 00:14:05.654 Atomic Boundary Size (Normal): 0 00:14:05.654 Atomic Boundary Size (PFail): 0 00:14:05.654 Atomic Boundary Offset: 0 00:14:05.654 Maximum Single Source Range Length: 
65535 00:14:05.654 Maximum Copy Length: 65535 00:14:05.654 Maximum Source Range Count: 1 00:14:05.654 NGUID/EUI64 Never Reused: No 00:14:05.654 Namespace Write Protected: No 00:14:05.654 Number of LBA Formats: 1 00:14:05.654 Current LBA Format: LBA Format #00 00:14:05.654 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:05.654 00:14:05.654 05:54:27 -- host/identify.sh@51 -- # sync 00:14:05.913 05:54:27 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.913 05:54:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.913 05:54:27 -- common/autotest_common.sh@10 -- # set +x 00:14:05.913 05:54:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.913 05:54:27 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:05.913 05:54:27 -- host/identify.sh@56 -- # nvmftestfini 00:14:05.913 05:54:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.913 05:54:27 -- nvmf/common.sh@116 -- # sync 00:14:05.913 05:54:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.913 05:54:27 -- nvmf/common.sh@119 -- # set +e 00:14:05.913 05:54:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.913 05:54:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.913 rmmod nvme_tcp 00:14:05.913 rmmod nvme_fabrics 00:14:05.913 rmmod nvme_keyring 00:14:05.913 05:54:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.913 05:54:27 -- nvmf/common.sh@123 -- # set -e 00:14:05.913 05:54:27 -- nvmf/common.sh@124 -- # return 0 00:14:05.913 05:54:27 -- nvmf/common.sh@477 -- # '[' -n 80003 ']' 00:14:05.913 05:54:27 -- nvmf/common.sh@478 -- # killprocess 80003 00:14:05.913 05:54:27 -- common/autotest_common.sh@936 -- # '[' -z 80003 ']' 00:14:05.913 05:54:27 -- common/autotest_common.sh@940 -- # kill -0 80003 00:14:05.913 05:54:27 -- common/autotest_common.sh@941 -- # uname 00:14:05.913 05:54:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.913 05:54:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80003 00:14:05.913 05:54:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.913 05:54:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.913 05:54:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80003' 00:14:05.913 killing process with pid 80003 00:14:05.913 05:54:27 -- common/autotest_common.sh@955 -- # kill 80003 00:14:05.913 [2024-12-15 05:54:27.427111] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:05.913 05:54:27 -- common/autotest_common.sh@960 -- # wait 80003 00:14:06.172 05:54:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:06.172 05:54:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:06.172 05:54:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:06.172 05:54:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.172 05:54:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:06.172 05:54:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.172 05:54:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.172 05:54:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.172 05:54:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:06.172 ************************************ 00:14:06.172 END TEST nvmf_identify 00:14:06.172 ************************************ 00:14:06.172 
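For quick reference, the teardown that nvmftestfini performed just above reduces to a short shell sequence. The sketch below condenses the steps visible in this run's trace (the rpc.py path, subsystem NQN, and target PID are the values from this run); the netns removal line is an assumption standing in for _remove_spdk_ns, which is not expanded in the trace.
# Minimal teardown sketch, condensed from the trace above; not the nvmftestfini implementation itself.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
sync
modprobe -v -r nvme-tcp        # unload host-side NVMe/TCP module; nvme_fabrics/nvme_keyring go with it, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill 80003 && wait 80003       # stop nvmf_tgt; works here because the test shell is its parent (PID 80003 in this run)
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumption: rough equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if                           # clear the initiator-side address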
00:14:06.172 real 0m2.500s 00:14:06.172 user 0m7.084s 00:14:06.172 sys 0m0.593s 00:14:06.172 05:54:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:06.172 05:54:27 -- common/autotest_common.sh@10 -- # set +x 00:14:06.172 05:54:27 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:06.172 05:54:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:06.172 05:54:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.172 05:54:27 -- common/autotest_common.sh@10 -- # set +x 00:14:06.172 ************************************ 00:14:06.172 START TEST nvmf_perf 00:14:06.172 ************************************ 00:14:06.172 05:54:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:06.172 * Looking for test storage... 00:14:06.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:06.172 05:54:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:06.172 05:54:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:06.172 05:54:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:06.431 05:54:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:06.431 05:54:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:06.431 05:54:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:06.431 05:54:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:06.431 05:54:27 -- scripts/common.sh@335 -- # IFS=.-: 00:14:06.431 05:54:27 -- scripts/common.sh@335 -- # read -ra ver1 00:14:06.431 05:54:27 -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.431 05:54:27 -- scripts/common.sh@336 -- # read -ra ver2 00:14:06.431 05:54:27 -- scripts/common.sh@337 -- # local 'op=<' 00:14:06.431 05:54:27 -- scripts/common.sh@339 -- # ver1_l=2 00:14:06.431 05:54:27 -- scripts/common.sh@340 -- # ver2_l=1 00:14:06.431 05:54:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:06.431 05:54:27 -- scripts/common.sh@343 -- # case "$op" in 00:14:06.431 05:54:27 -- scripts/common.sh@344 -- # : 1 00:14:06.431 05:54:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:06.431 05:54:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.431 05:54:27 -- scripts/common.sh@364 -- # decimal 1 00:14:06.431 05:54:27 -- scripts/common.sh@352 -- # local d=1 00:14:06.431 05:54:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.431 05:54:27 -- scripts/common.sh@354 -- # echo 1 00:14:06.431 05:54:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:06.431 05:54:27 -- scripts/common.sh@365 -- # decimal 2 00:14:06.431 05:54:27 -- scripts/common.sh@352 -- # local d=2 00:14:06.431 05:54:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.431 05:54:27 -- scripts/common.sh@354 -- # echo 2 00:14:06.431 05:54:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:06.431 05:54:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:06.431 05:54:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:06.431 05:54:27 -- scripts/common.sh@367 -- # return 0 00:14:06.431 05:54:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.431 05:54:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:06.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.431 --rc genhtml_branch_coverage=1 00:14:06.431 --rc genhtml_function_coverage=1 00:14:06.431 --rc genhtml_legend=1 00:14:06.431 --rc geninfo_all_blocks=1 00:14:06.431 --rc geninfo_unexecuted_blocks=1 00:14:06.431 00:14:06.431 ' 00:14:06.431 05:54:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:06.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.431 --rc genhtml_branch_coverage=1 00:14:06.431 --rc genhtml_function_coverage=1 00:14:06.431 --rc genhtml_legend=1 00:14:06.431 --rc geninfo_all_blocks=1 00:14:06.431 --rc geninfo_unexecuted_blocks=1 00:14:06.431 00:14:06.431 ' 00:14:06.431 05:54:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:06.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.431 --rc genhtml_branch_coverage=1 00:14:06.431 --rc genhtml_function_coverage=1 00:14:06.431 --rc genhtml_legend=1 00:14:06.431 --rc geninfo_all_blocks=1 00:14:06.431 --rc geninfo_unexecuted_blocks=1 00:14:06.431 00:14:06.431 ' 00:14:06.431 05:54:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:06.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.431 --rc genhtml_branch_coverage=1 00:14:06.431 --rc genhtml_function_coverage=1 00:14:06.431 --rc genhtml_legend=1 00:14:06.431 --rc geninfo_all_blocks=1 00:14:06.431 --rc geninfo_unexecuted_blocks=1 00:14:06.431 00:14:06.431 ' 00:14:06.431 05:54:27 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:06.431 05:54:27 -- nvmf/common.sh@7 -- # uname -s 00:14:06.431 05:54:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.431 05:54:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.431 05:54:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.431 05:54:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.431 05:54:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.431 05:54:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.431 05:54:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.431 05:54:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.431 05:54:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.431 05:54:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.431 05:54:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:14:06.431 
05:54:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:14:06.431 05:54:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.431 05:54:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.431 05:54:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:06.431 05:54:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:06.431 05:54:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.431 05:54:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.431 05:54:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.431 05:54:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.431 05:54:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.431 05:54:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.431 05:54:27 -- paths/export.sh@5 -- # export PATH 00:14:06.431 05:54:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.431 05:54:27 -- nvmf/common.sh@46 -- # : 0 00:14:06.431 05:54:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:06.431 05:54:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:06.431 05:54:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:06.431 05:54:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.431 05:54:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.431 05:54:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:14:06.431 05:54:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:06.431 05:54:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:06.432 05:54:27 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:06.432 05:54:27 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:06.432 05:54:27 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.432 05:54:27 -- host/perf.sh@17 -- # nvmftestinit 00:14:06.432 05:54:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:06.432 05:54:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.432 05:54:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:06.432 05:54:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:06.432 05:54:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:06.432 05:54:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.432 05:54:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.432 05:54:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.432 05:54:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:06.432 05:54:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:06.432 05:54:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:06.432 05:54:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:06.432 05:54:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:06.432 05:54:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:06.432 05:54:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.432 05:54:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.432 05:54:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:06.432 05:54:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:06.432 05:54:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:06.432 05:54:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:06.432 05:54:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:06.432 05:54:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.432 05:54:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:06.432 05:54:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:06.432 05:54:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:06.432 05:54:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:06.432 05:54:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:06.432 05:54:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:06.432 Cannot find device "nvmf_tgt_br" 00:14:06.432 05:54:27 -- nvmf/common.sh@154 -- # true 00:14:06.432 05:54:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.432 Cannot find device "nvmf_tgt_br2" 00:14:06.432 05:54:27 -- nvmf/common.sh@155 -- # true 00:14:06.432 05:54:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:06.432 05:54:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:06.432 Cannot find device "nvmf_tgt_br" 00:14:06.432 05:54:27 -- nvmf/common.sh@157 -- # true 00:14:06.432 05:54:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:06.432 Cannot find device "nvmf_tgt_br2" 00:14:06.432 05:54:27 -- nvmf/common.sh@158 -- # true 00:14:06.432 05:54:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:06.432 05:54:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:06.432 05:54:28 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.432 05:54:28 -- nvmf/common.sh@161 -- # true 00:14:06.432 05:54:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.432 05:54:28 -- nvmf/common.sh@162 -- # true 00:14:06.432 05:54:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:06.432 05:54:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:06.432 05:54:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:06.432 05:54:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:06.432 05:54:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.432 05:54:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.691 05:54:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.691 05:54:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.691 05:54:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.691 05:54:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:06.691 05:54:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:06.691 05:54:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:06.691 05:54:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:06.691 05:54:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.691 05:54:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:06.691 05:54:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.691 05:54:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:06.691 05:54:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:06.691 05:54:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:06.691 05:54:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.691 05:54:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.691 05:54:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.691 05:54:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.691 05:54:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:06.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:06.691 00:14:06.691 --- 10.0.0.2 ping statistics --- 00:14:06.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.691 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:06.691 05:54:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:06.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:06.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:14:06.691 00:14:06.691 --- 10.0.0.3 ping statistics --- 00:14:06.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.691 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:06.691 05:54:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:06.691 00:14:06.691 --- 10.0.0.1 ping statistics --- 00:14:06.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.691 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:06.691 05:54:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.691 05:54:28 -- nvmf/common.sh@421 -- # return 0 00:14:06.691 05:54:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.691 05:54:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.691 05:54:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.691 05:54:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.691 05:54:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.691 05:54:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.691 05:54:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.691 05:54:28 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:06.691 05:54:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.691 05:54:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.691 05:54:28 -- common/autotest_common.sh@10 -- # set +x 00:14:06.691 05:54:28 -- nvmf/common.sh@469 -- # nvmfpid=80217 00:14:06.691 05:54:28 -- nvmf/common.sh@470 -- # waitforlisten 80217 00:14:06.691 05:54:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.691 05:54:28 -- common/autotest_common.sh@829 -- # '[' -z 80217 ']' 00:14:06.691 05:54:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.691 05:54:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.691 05:54:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.691 05:54:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.691 05:54:28 -- common/autotest_common.sh@10 -- # set +x 00:14:06.691 [2024-12-15 05:54:28.272227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:06.691 [2024-12-15 05:54:28.272315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.950 [2024-12-15 05:54:28.405882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.950 [2024-12-15 05:54:28.438889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:06.950 [2024-12-15 05:54:28.439097] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.950 [2024-12-15 05:54:28.439110] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
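Condensed from the nvmf_veth_init trace above: the test rig is three veth pairs whose host-side peers hang off one bridge, with the target-side peers moved into the nvmf_tgt_ns_spdk namespace. The sketch below simply collects the commands the trace shows into one place as an illustration; it is not a substitute for nvmf/common.sh.
# Create the target namespace and the three veth pairs (initiator, target, second target address).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: 10.0.0.1 is the initiator, 10.0.0.2 and 10.0.0.3 are the target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up on both sides of the namespace boundary.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# One bridge joins the host-side peers so the initiator and the target namespace share an L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Open NVMe/TCP port 4420 toward the initiator interface, allow bridge forwarding, then verify with pings.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1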
00:14:06.950 [2024-12-15 05:54:28.439118] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.950 [2024-12-15 05:54:28.439480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.950 [2024-12-15 05:54:28.439625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.950 [2024-12-15 05:54:28.439750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.950 [2024-12-15 05:54:28.439754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.950 05:54:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.950 05:54:28 -- common/autotest_common.sh@862 -- # return 0 00:14:06.950 05:54:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:06.950 05:54:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.950 05:54:28 -- common/autotest_common.sh@10 -- # set +x 00:14:06.950 05:54:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.950 05:54:28 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:06.950 05:54:28 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:07.516 05:54:29 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:07.516 05:54:29 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:07.775 05:54:29 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:07.775 05:54:29 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.033 05:54:29 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:08.033 05:54:29 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:08.033 05:54:29 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:08.033 05:54:29 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:08.033 05:54:29 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:08.291 [2024-12-15 05:54:29.778120] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.291 05:54:29 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:08.549 05:54:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:08.549 05:54:30 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:08.933 05:54:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:08.933 05:54:30 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:09.191 05:54:30 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.191 [2024-12-15 05:54:30.819604] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.450 05:54:30 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.450 05:54:31 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:14:09.450 05:54:31 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:09.450 05:54:31 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:09.450 05:54:31 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:10.826 Initializing NVMe Controllers 00:14:10.826 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:10.826 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:10.826 Initialization complete. Launching workers. 00:14:10.826 ======================================================== 00:14:10.826 Latency(us) 00:14:10.826 Device Information : IOPS MiB/s Average min max 00:14:10.826 PCIE (0000:00:06.0) NSID 1 from core 0: 23036.64 89.99 1389.22 340.42 8096.43 00:14:10.826 ======================================================== 00:14:10.826 Total : 23036.64 89.99 1389.22 340.42 8096.43 00:14:10.826 00:14:10.826 05:54:32 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:12.204 Initializing NVMe Controllers 00:14:12.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:12.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:12.204 Initialization complete. Launching workers. 00:14:12.204 ======================================================== 00:14:12.204 Latency(us) 00:14:12.204 Device Information : IOPS MiB/s Average min max 00:14:12.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3694.18 14.43 269.54 100.51 7297.60 00:14:12.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.51 0.48 8218.91 6895.47 15027.75 00:14:12.204 ======================================================== 00:14:12.204 Total : 3816.68 14.91 524.70 100.51 15027.75 00:14:12.204 00:14:12.204 05:54:33 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:13.585 Initializing NVMe Controllers 00:14:13.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:13.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:13.586 Initialization complete. Launching workers. 00:14:13.586 ======================================================== 00:14:13.586 Latency(us) 00:14:13.586 Device Information : IOPS MiB/s Average min max 00:14:13.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9065.11 35.41 3534.32 417.02 9493.13 00:14:13.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3903.18 15.25 8253.10 6041.66 16301.43 00:14:13.586 ======================================================== 00:14:13.586 Total : 12968.29 50.66 4954.57 417.02 16301.43 00:14:13.586 00:14:13.586 05:54:34 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:13.586 05:54:34 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:16.120 Initializing NVMe Controllers 00:14:16.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:16.121 Controller IO queue size 128, less than required. 
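As an aside before the remaining runs, the target these perf workloads connect to was assembled with the rpc.py calls traced at host/perf.sh@31 through @49 above. Pulled into one sequence it looks roughly like the sketch below; the Malloc0 geometry, subsystem NQN, serial, and listener address are the values used in this run, and the bdev_nvme_attach_controller line is an assumption standing in for the load_subsystem_config/gen_nvme.sh step the job actually used.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" nvmf_create_transport -t tcp -o                    # transport flags exactly as traced above
"$rpc_py" bdev_malloc_create 64 512                          # 64 MiB, 512-byte blocks -> Malloc0
"$rpc_py" bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:06.0   # assumption; the trace used gen_nvme.sh instead
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420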
00:14:16.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.121 Controller IO queue size 128, less than required. 00:14:16.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:16.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:16.121 Initialization complete. Launching workers. 00:14:16.121 ======================================================== 00:14:16.121 Latency(us) 00:14:16.121 Device Information : IOPS MiB/s Average min max 00:14:16.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1812.43 453.11 72224.80 39011.81 169568.70 00:14:16.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 639.59 159.90 209877.99 102422.54 338166.89 00:14:16.121 ======================================================== 00:14:16.121 Total : 2452.03 613.01 108130.67 39011.81 338166.89 00:14:16.121 00:14:16.121 05:54:37 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:16.121 No valid NVMe controllers or AIO or URING devices found 00:14:16.121 Initializing NVMe Controllers 00:14:16.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:16.121 Controller IO queue size 128, less than required. 00:14:16.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.121 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:16.121 Controller IO queue size 128, less than required. 00:14:16.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.121 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:16.121 WARNING: Some requested NVMe devices were skipped 00:14:16.121 05:54:37 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:18.653 Initializing NVMe Controllers 00:14:18.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.653 Controller IO queue size 128, less than required. 00:14:18.653 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.653 Controller IO queue size 128, less than required. 00:14:18.653 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:18.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:18.653 Initialization complete. Launching workers. 
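The runs above all drive the same spdk_nvme_perf binary with different queue depths, block sizes, and runtimes. As an illustration, three representative invocations from this job are collected below; the flag values and transport ID strings are copied from the traces above, and flags not commented here are used exactly as the job used them.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# Local PCIe baseline: 4 KiB random I/O, 50/50 read/write mix, queue depth 32, 1 second.
"$PERF" -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0'
# Same workload shape against the NVMe/TCP listener set up earlier, queue depth 1.
"$PERF" -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# 256 KiB random I/O at queue depth 128 with per-connection transport statistics, as in the final run.
"$PERF" -q 128 -o 262144 -w randrw -M 50 -t 2 --transport-stat -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'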
00:14:18.653 00:14:18.653 ==================== 00:14:18.653 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:18.653 TCP transport: 00:14:18.653 polls: 7567 00:14:18.653 idle_polls: 0 00:14:18.653 sock_completions: 7567 00:14:18.653 nvme_completions: 6362 00:14:18.653 submitted_requests: 9708 00:14:18.653 queued_requests: 1 00:14:18.653 00:14:18.653 ==================== 00:14:18.653 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:18.653 TCP transport: 00:14:18.653 polls: 7674 00:14:18.653 idle_polls: 0 00:14:18.653 sock_completions: 7674 00:14:18.653 nvme_completions: 6206 00:14:18.653 submitted_requests: 9480 00:14:18.653 queued_requests: 1 00:14:18.653 ======================================================== 00:14:18.653 Latency(us) 00:14:18.653 Device Information : IOPS MiB/s Average min max 00:14:18.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1653.78 413.44 78778.59 35730.48 161342.26 00:14:18.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1614.78 403.70 79479.60 37518.25 141763.95 00:14:18.654 ======================================================== 00:14:18.654 Total : 3268.56 817.14 79124.91 35730.48 161342.26 00:14:18.654 00:14:18.654 05:54:40 -- host/perf.sh@66 -- # sync 00:14:18.654 05:54:40 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.911 05:54:40 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:18.911 05:54:40 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:18.911 05:54:40 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:19.169 05:54:40 -- host/perf.sh@72 -- # ls_guid=d77162cb-f3c2-461b-bed3-d008ee854403 00:14:19.169 05:54:40 -- host/perf.sh@73 -- # get_lvs_free_mb d77162cb-f3c2-461b-bed3-d008ee854403 00:14:19.169 05:54:40 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d77162cb-f3c2-461b-bed3-d008ee854403 00:14:19.169 05:54:40 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:19.169 05:54:40 -- common/autotest_common.sh@1355 -- # local fc 00:14:19.169 05:54:40 -- common/autotest_common.sh@1356 -- # local cs 00:14:19.169 05:54:40 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:19.427 05:54:40 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:19.427 { 00:14:19.427 "uuid": "d77162cb-f3c2-461b-bed3-d008ee854403", 00:14:19.427 "name": "lvs_0", 00:14:19.427 "base_bdev": "Nvme0n1", 00:14:19.427 "total_data_clusters": 1278, 00:14:19.427 "free_clusters": 1278, 00:14:19.427 "block_size": 4096, 00:14:19.427 "cluster_size": 4194304 00:14:19.427 } 00:14:19.427 ]' 00:14:19.427 05:54:40 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d77162cb-f3c2-461b-bed3-d008ee854403") .free_clusters' 00:14:19.427 05:54:41 -- common/autotest_common.sh@1358 -- # fc=1278 00:14:19.427 05:54:41 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d77162cb-f3c2-461b-bed3-d008ee854403") .cluster_size' 00:14:19.684 5112 00:14:19.684 05:54:41 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:19.685 05:54:41 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:14:19.685 05:54:41 -- common/autotest_common.sh@1363 -- # echo 5112 00:14:19.685 05:54:41 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:19.685 05:54:41 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
d77162cb-f3c2-461b-bed3-d008ee854403 lbd_0 5112 00:14:19.943 05:54:41 -- host/perf.sh@80 -- # lb_guid=08a74d96-5777-4dd5-bb74-12ac87d768b5 00:14:19.943 05:54:41 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 08a74d96-5777-4dd5-bb74-12ac87d768b5 lvs_n_0 00:14:20.200 05:54:41 -- host/perf.sh@83 -- # ls_nested_guid=57ad4ab9-8dfc-46ae-8728-43cf50937b5a 00:14:20.200 05:54:41 -- host/perf.sh@84 -- # get_lvs_free_mb 57ad4ab9-8dfc-46ae-8728-43cf50937b5a 00:14:20.200 05:54:41 -- common/autotest_common.sh@1353 -- # local lvs_uuid=57ad4ab9-8dfc-46ae-8728-43cf50937b5a 00:14:20.200 05:54:41 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:20.200 05:54:41 -- common/autotest_common.sh@1355 -- # local fc 00:14:20.200 05:54:41 -- common/autotest_common.sh@1356 -- # local cs 00:14:20.200 05:54:41 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:20.458 05:54:41 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:20.458 { 00:14:20.458 "uuid": "d77162cb-f3c2-461b-bed3-d008ee854403", 00:14:20.458 "name": "lvs_0", 00:14:20.458 "base_bdev": "Nvme0n1", 00:14:20.458 "total_data_clusters": 1278, 00:14:20.458 "free_clusters": 0, 00:14:20.458 "block_size": 4096, 00:14:20.458 "cluster_size": 4194304 00:14:20.458 }, 00:14:20.458 { 00:14:20.458 "uuid": "57ad4ab9-8dfc-46ae-8728-43cf50937b5a", 00:14:20.459 "name": "lvs_n_0", 00:14:20.459 "base_bdev": "08a74d96-5777-4dd5-bb74-12ac87d768b5", 00:14:20.459 "total_data_clusters": 1276, 00:14:20.459 "free_clusters": 1276, 00:14:20.459 "block_size": 4096, 00:14:20.459 "cluster_size": 4194304 00:14:20.459 } 00:14:20.459 ]' 00:14:20.459 05:54:41 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="57ad4ab9-8dfc-46ae-8728-43cf50937b5a") .free_clusters' 00:14:20.459 05:54:41 -- common/autotest_common.sh@1358 -- # fc=1276 00:14:20.459 05:54:41 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="57ad4ab9-8dfc-46ae-8728-43cf50937b5a") .cluster_size' 00:14:20.459 5104 00:14:20.459 05:54:42 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:20.459 05:54:42 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:14:20.459 05:54:42 -- common/autotest_common.sh@1363 -- # echo 5104 00:14:20.459 05:54:42 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:20.459 05:54:42 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 57ad4ab9-8dfc-46ae-8728-43cf50937b5a lbd_nest_0 5104 00:14:20.717 05:54:42 -- host/perf.sh@88 -- # lb_nested_guid=06fbdecc-4019-428b-af47-4bfc30a98738 00:14:20.717 05:54:42 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:21.042 05:54:42 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:21.042 05:54:42 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 06fbdecc-4019-428b-af47-4bfc30a98738 00:14:21.300 05:54:42 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.557 05:54:43 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:21.557 05:54:43 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:21.557 05:54:43 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:21.557 05:54:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:21.557 05:54:43 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:21.814 No valid NVMe controllers or AIO or URING devices found 00:14:21.814 Initializing NVMe Controllers 00:14:21.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:21.814 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:21.814 WARNING: Some requested NVMe devices were skipped 00:14:21.814 05:54:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:21.814 05:54:43 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:34.013 Initializing NVMe Controllers 00:14:34.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:34.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:34.013 Initialization complete. Launching workers. 00:14:34.013 ======================================================== 00:14:34.013 Latency(us) 00:14:34.013 Device Information : IOPS MiB/s Average min max 00:14:34.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 955.00 119.37 1047.45 323.59 7654.83 00:14:34.013 ======================================================== 00:14:34.013 Total : 955.00 119.37 1047.45 323.59 7654.83 00:14:34.013 00:14:34.013 05:54:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:34.013 05:54:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:34.013 05:54:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:34.013 No valid NVMe controllers or AIO or URING devices found 00:14:34.013 Initializing NVMe Controllers 00:14:34.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:34.013 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:34.013 WARNING: Some requested NVMe devices were skipped 00:14:34.013 05:54:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:34.013 05:54:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:43.992 Initializing NVMe Controllers 00:14:43.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:43.992 Initialization complete. Launching workers. 
00:14:43.992 ======================================================== 00:14:43.992 Latency(us) 00:14:43.992 Device Information : IOPS MiB/s Average min max 00:14:43.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1355.80 169.47 23625.42 5265.94 63865.24 00:14:43.992 ======================================================== 00:14:43.992 Total : 1355.80 169.47 23625.42 5265.94 63865.24 00:14:43.992 00:14:43.992 05:55:04 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:43.992 05:55:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:43.992 05:55:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:43.992 No valid NVMe controllers or AIO or URING devices found 00:14:43.992 Initializing NVMe Controllers 00:14:43.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.993 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:43.993 WARNING: Some requested NVMe devices were skipped 00:14:43.993 05:55:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:43.993 05:55:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:53.970 Initializing NVMe Controllers 00:14:53.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.970 Controller IO queue size 128, less than required. 00:14:53.970 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:53.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:53.970 Initialization complete. Launching workers. 
00:14:53.970 ======================================================== 00:14:53.970 Latency(us) 00:14:53.970 Device Information : IOPS MiB/s Average min max 00:14:53.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4011.33 501.42 31973.58 11568.58 63678.30 00:14:53.970 ======================================================== 00:14:53.970 Total : 4011.33 501.42 31973.58 11568.58 63678.30 00:14:53.970 00:14:53.970 05:55:14 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.970 05:55:15 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 06fbdecc-4019-428b-af47-4bfc30a98738 00:14:53.970 05:55:15 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:54.228 05:55:15 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 08a74d96-5777-4dd5-bb74-12ac87d768b5 00:14:54.795 05:55:16 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:54.795 05:55:16 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:54.795 05:55:16 -- host/perf.sh@114 -- # nvmftestfini 00:14:54.795 05:55:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:54.795 05:55:16 -- nvmf/common.sh@116 -- # sync 00:14:54.795 05:55:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:54.795 05:55:16 -- nvmf/common.sh@119 -- # set +e 00:14:54.795 05:55:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:54.795 05:55:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:54.795 rmmod nvme_tcp 00:14:55.053 rmmod nvme_fabrics 00:14:55.053 rmmod nvme_keyring 00:14:55.053 05:55:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:55.053 05:55:16 -- nvmf/common.sh@123 -- # set -e 00:14:55.053 05:55:16 -- nvmf/common.sh@124 -- # return 0 00:14:55.053 05:55:16 -- nvmf/common.sh@477 -- # '[' -n 80217 ']' 00:14:55.053 05:55:16 -- nvmf/common.sh@478 -- # killprocess 80217 00:14:55.053 05:55:16 -- common/autotest_common.sh@936 -- # '[' -z 80217 ']' 00:14:55.053 05:55:16 -- common/autotest_common.sh@940 -- # kill -0 80217 00:14:55.053 05:55:16 -- common/autotest_common.sh@941 -- # uname 00:14:55.053 05:55:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:55.053 05:55:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80217 00:14:55.053 killing process with pid 80217 00:14:55.053 05:55:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:55.053 05:55:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:55.053 05:55:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80217' 00:14:55.053 05:55:16 -- common/autotest_common.sh@955 -- # kill 80217 00:14:55.053 05:55:16 -- common/autotest_common.sh@960 -- # wait 80217 00:14:56.431 05:55:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:56.431 05:55:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:56.431 05:55:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:56.431 05:55:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.431 05:55:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:56.431 05:55:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.431 05:55:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.431 05:55:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.431 05:55:17 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:14:56.431 ************************************ 00:14:56.431 END TEST nvmf_perf 00:14:56.431 ************************************ 00:14:56.431 00:14:56.431 real 0m50.331s 00:14:56.431 user 3m10.220s 00:14:56.431 sys 0m12.480s 00:14:56.431 05:55:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:56.431 05:55:17 -- common/autotest_common.sh@10 -- # set +x 00:14:56.431 05:55:18 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:56.431 05:55:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:56.431 05:55:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.431 05:55:18 -- common/autotest_common.sh@10 -- # set +x 00:14:56.431 ************************************ 00:14:56.431 START TEST nvmf_fio_host 00:14:56.431 ************************************ 00:14:56.431 05:55:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:56.691 * Looking for test storage... 00:14:56.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:56.691 05:55:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:56.691 05:55:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:56.691 05:55:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:56.691 05:55:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:56.691 05:55:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:56.691 05:55:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:56.691 05:55:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:56.691 05:55:18 -- scripts/common.sh@335 -- # IFS=.-: 00:14:56.691 05:55:18 -- scripts/common.sh@335 -- # read -ra ver1 00:14:56.691 05:55:18 -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.691 05:55:18 -- scripts/common.sh@336 -- # read -ra ver2 00:14:56.691 05:55:18 -- scripts/common.sh@337 -- # local 'op=<' 00:14:56.691 05:55:18 -- scripts/common.sh@339 -- # ver1_l=2 00:14:56.691 05:55:18 -- scripts/common.sh@340 -- # ver2_l=1 00:14:56.691 05:55:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:56.691 05:55:18 -- scripts/common.sh@343 -- # case "$op" in 00:14:56.691 05:55:18 -- scripts/common.sh@344 -- # : 1 00:14:56.691 05:55:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:56.691 05:55:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.691 05:55:18 -- scripts/common.sh@364 -- # decimal 1 00:14:56.691 05:55:18 -- scripts/common.sh@352 -- # local d=1 00:14:56.691 05:55:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.691 05:55:18 -- scripts/common.sh@354 -- # echo 1 00:14:56.691 05:55:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:56.691 05:55:18 -- scripts/common.sh@365 -- # decimal 2 00:14:56.691 05:55:18 -- scripts/common.sh@352 -- # local d=2 00:14:56.691 05:55:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.691 05:55:18 -- scripts/common.sh@354 -- # echo 2 00:14:56.691 05:55:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:56.691 05:55:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:56.691 05:55:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:56.691 05:55:18 -- scripts/common.sh@367 -- # return 0 00:14:56.691 05:55:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.691 05:55:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:56.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.691 --rc genhtml_branch_coverage=1 00:14:56.691 --rc genhtml_function_coverage=1 00:14:56.691 --rc genhtml_legend=1 00:14:56.691 --rc geninfo_all_blocks=1 00:14:56.691 --rc geninfo_unexecuted_blocks=1 00:14:56.691 00:14:56.691 ' 00:14:56.691 05:55:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:56.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.691 --rc genhtml_branch_coverage=1 00:14:56.691 --rc genhtml_function_coverage=1 00:14:56.691 --rc genhtml_legend=1 00:14:56.691 --rc geninfo_all_blocks=1 00:14:56.691 --rc geninfo_unexecuted_blocks=1 00:14:56.691 00:14:56.691 ' 00:14:56.692 05:55:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:56.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.692 --rc genhtml_branch_coverage=1 00:14:56.692 --rc genhtml_function_coverage=1 00:14:56.692 --rc genhtml_legend=1 00:14:56.692 --rc geninfo_all_blocks=1 00:14:56.692 --rc geninfo_unexecuted_blocks=1 00:14:56.692 00:14:56.692 ' 00:14:56.692 05:55:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:56.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.692 --rc genhtml_branch_coverage=1 00:14:56.692 --rc genhtml_function_coverage=1 00:14:56.692 --rc genhtml_legend=1 00:14:56.692 --rc geninfo_all_blocks=1 00:14:56.692 --rc geninfo_unexecuted_blocks=1 00:14:56.692 00:14:56.692 ' 00:14:56.692 05:55:18 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.692 05:55:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.692 05:55:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.692 05:55:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.692 05:55:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.692 05:55:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.692 05:55:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.692 05:55:18 -- paths/export.sh@5 -- # export PATH 00:14:56.692 05:55:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.692 05:55:18 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.692 05:55:18 -- nvmf/common.sh@7 -- # uname -s 00:14:56.692 05:55:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.692 05:55:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.692 05:55:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.692 05:55:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.692 05:55:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.692 05:55:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.692 05:55:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.692 05:55:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.692 05:55:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.692 05:55:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.692 05:55:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:14:56.692 05:55:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:14:56.692 05:55:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.692 05:55:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.692 05:55:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.692 05:55:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.692 05:55:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.692 05:55:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.692 05:55:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.692 05:55:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.692 05:55:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.692 05:55:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.692 05:55:18 -- paths/export.sh@5 -- # export PATH 00:14:56.692 05:55:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.692 05:55:18 -- nvmf/common.sh@46 -- # : 0 00:14:56.692 05:55:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:56.692 05:55:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:56.692 05:55:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:56.692 05:55:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.692 05:55:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.692 05:55:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:56.692 05:55:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:56.692 05:55:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:56.692 05:55:18 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.692 05:55:18 -- host/fio.sh@14 -- # nvmftestinit 00:14:56.692 05:55:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:56.692 05:55:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.692 05:55:18 -- nvmf/common.sh@436 -- # prepare_net_devs 
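Note: with NET_TYPE=virt, prepare_net_devs falls through to nvmf_veth_init, which builds the virtual topology traced below: a network namespace for the target, veth pairs, and a bridge joining them, with 10.0.0.1 on the initiator side and 10.0.0.2 inside the namespace. A condensed sketch of those ip commands (names and addresses are exactly the ones in the trace; the 'ip link set ... up' steps and the second target interface nvmf_tgt_if2/10.0.0.3, which follows the same pattern, are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT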
00:14:56.692 05:55:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:56.692 05:55:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:56.692 05:55:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.692 05:55:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.692 05:55:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.692 05:55:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:56.692 05:55:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:56.692 05:55:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:56.692 05:55:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:56.692 05:55:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:56.692 05:55:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:56.692 05:55:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.692 05:55:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.692 05:55:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.692 05:55:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:56.692 05:55:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.692 05:55:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.692 05:55:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.692 05:55:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.692 05:55:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.692 05:55:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.692 05:55:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.692 05:55:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.692 05:55:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:56.692 05:55:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:56.692 Cannot find device "nvmf_tgt_br" 00:14:56.692 05:55:18 -- nvmf/common.sh@154 -- # true 00:14:56.692 05:55:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.692 Cannot find device "nvmf_tgt_br2" 00:14:56.692 05:55:18 -- nvmf/common.sh@155 -- # true 00:14:56.692 05:55:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:56.692 05:55:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:56.692 Cannot find device "nvmf_tgt_br" 00:14:56.692 05:55:18 -- nvmf/common.sh@157 -- # true 00:14:56.692 05:55:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:56.692 Cannot find device "nvmf_tgt_br2" 00:14:56.692 05:55:18 -- nvmf/common.sh@158 -- # true 00:14:56.692 05:55:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:56.951 05:55:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:56.951 05:55:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.951 05:55:18 -- nvmf/common.sh@161 -- # true 00:14:56.951 05:55:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.951 05:55:18 -- nvmf/common.sh@162 -- # true 00:14:56.951 05:55:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.951 05:55:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.951 05:55:18 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.951 05:55:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.951 05:55:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.951 05:55:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.951 05:55:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.951 05:55:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.951 05:55:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.951 05:55:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:56.951 05:55:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:56.951 05:55:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:56.951 05:55:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:56.951 05:55:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.951 05:55:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.951 05:55:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.951 05:55:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:56.951 05:55:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:56.951 05:55:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.951 05:55:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.951 05:55:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.951 05:55:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.951 05:55:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.951 05:55:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:56.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:56.951 00:14:56.951 --- 10.0.0.2 ping statistics --- 00:14:56.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.952 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:56.952 05:55:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:56.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:56.952 00:14:56.952 --- 10.0.0.3 ping statistics --- 00:14:56.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.952 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:56.952 05:55:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:56.952 00:14:56.952 --- 10.0.0.1 ping statistics --- 00:14:56.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.952 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:56.952 05:55:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.952 05:55:18 -- nvmf/common.sh@421 -- # return 0 00:14:56.952 05:55:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:56.952 05:55:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.952 05:55:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:56.952 05:55:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:56.952 05:55:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.952 05:55:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:56.952 05:55:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:56.952 05:55:18 -- host/fio.sh@16 -- # [[ y != y ]] 00:14:56.952 05:55:18 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:56.952 05:55:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.952 05:55:18 -- common/autotest_common.sh@10 -- # set +x 00:14:56.952 05:55:18 -- host/fio.sh@24 -- # nvmfpid=81038 00:14:56.952 05:55:18 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:56.952 05:55:18 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.952 05:55:18 -- host/fio.sh@28 -- # waitforlisten 81038 00:14:56.952 05:55:18 -- common/autotest_common.sh@829 -- # '[' -z 81038 ']' 00:14:56.952 05:55:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.952 05:55:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.952 05:55:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.210 05:55:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.211 05:55:18 -- common/autotest_common.sh@10 -- # set +x 00:14:57.211 [2024-12-15 05:55:18.633710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:57.211 [2024-12-15 05:55:18.633839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.211 [2024-12-15 05:55:18.769629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.211 [2024-12-15 05:55:18.804400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.211 [2024-12-15 05:55:18.804772] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.211 [2024-12-15 05:55:18.804793] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.211 [2024-12-15 05:55:18.804802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
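Note: once nvmf_tgt is running inside the namespace, host/fio.sh builds the subsystem that the fio jobs below connect to. A condensed sketch of that RPC sequence as traced further down (NQNs, paths and addresses are the ones from this run; RPC is just shorthand for rpc.py and the comments are informal annotations):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc1                 # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # The fio job is then pointed at that listener through the SPDK NVMe fio plugin:
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096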
00:14:57.211 [2024-12-15 05:55:18.804944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.211 [2024-12-15 05:55:18.805093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.211 [2024-12-15 05:55:18.805228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.211 [2024-12-15 05:55:18.805229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.469 05:55:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.469 05:55:18 -- common/autotest_common.sh@862 -- # return 0 00:14:57.469 05:55:18 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:57.728 [2024-12-15 05:55:19.154993] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.728 05:55:19 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:57.728 05:55:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.728 05:55:19 -- common/autotest_common.sh@10 -- # set +x 00:14:57.728 05:55:19 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:57.986 Malloc1 00:14:57.986 05:55:19 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.245 05:55:19 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.503 05:55:20 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.761 [2024-12-15 05:55:20.278590] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.761 05:55:20 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:59.020 05:55:20 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:59.020 05:55:20 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:59.020 05:55:20 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:59.020 05:55:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:59.020 05:55:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.020 05:55:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:59.020 05:55:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.020 05:55:20 -- common/autotest_common.sh@1330 -- # shift 00:14:59.020 05:55:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:59.020 05:55:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.020 05:55:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:59.020 05:55:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.020 05:55:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:59.020 05:55:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:59.020 05:55:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:59.020 05:55:20 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.020 05:55:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.020 05:55:20 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:59.020 05:55:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:59.020 05:55:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:59.020 05:55:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:59.020 05:55:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:59.020 05:55:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:59.279 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:59.279 fio-3.35 00:14:59.279 Starting 1 thread 00:15:01.811 00:15:01.811 test: (groupid=0, jobs=1): err= 0: pid=81113: Sun Dec 15 05:55:22 2024 00:15:01.811 read: IOPS=9472, BW=37.0MiB/s (38.8MB/s)(74.2MiB/2006msec) 00:15:01.811 slat (nsec): min=1873, max=326481, avg=2453.55, stdev=3596.17 00:15:01.811 clat (usec): min=2559, max=12644, avg=7032.64, stdev=589.10 00:15:01.811 lat (usec): min=2598, max=12646, avg=7035.09, stdev=588.98 00:15:01.811 clat percentiles (usec): 00:15:01.811 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:15:01.811 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:15:01.811 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7898], 00:15:01.811 | 99.00th=[ 8455], 99.50th=[10028], 99.90th=[11338], 99.95th=[11731], 00:15:01.811 | 99.99th=[12649] 00:15:01.811 bw ( KiB/s): min=36766, max=38976, per=99.93%, avg=37863.50, stdev=968.77, samples=4 00:15:01.811 iops : min= 9191, max= 9744, avg=9465.75, stdev=242.38, samples=4 00:15:01.811 write: IOPS=9477, BW=37.0MiB/s (38.8MB/s)(74.3MiB/2006msec); 0 zone resets 00:15:01.811 slat (nsec): min=1933, max=264667, avg=2527.35, stdev=2543.96 00:15:01.811 clat (usec): min=2400, max=12308, avg=6423.34, stdev=539.05 00:15:01.811 lat (usec): min=2414, max=12310, avg=6425.87, stdev=538.99 00:15:01.811 clat percentiles (usec): 00:15:01.811 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:15:01.811 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:15:01.811 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7242], 00:15:01.811 | 99.00th=[ 7701], 99.50th=[ 8094], 99.90th=[10814], 99.95th=[11600], 00:15:01.811 | 99.99th=[12256] 00:15:01.811 bw ( KiB/s): min=37368, max=38784, per=99.90%, avg=37871.00, stdev=625.44, samples=4 00:15:01.811 iops : min= 9342, max= 9696, avg=9467.75, stdev=156.36, samples=4 00:15:01.811 lat (msec) : 4=0.08%, 10=99.56%, 20=0.36% 00:15:01.811 cpu : usr=70.27%, sys=21.90%, ctx=20, majf=0, minf=5 00:15:01.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:01.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:01.811 issued rwts: total=19001,19012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.811 00:15:01.811 Run status group 0 (all jobs): 00:15:01.811 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.2MiB (77.8MB), 
run=2006-2006msec 00:15:01.811 WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.3MiB (77.9MB), run=2006-2006msec 00:15:01.811 05:55:23 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:01.811 05:55:23 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:01.811 05:55:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:01.811 05:55:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.811 05:55:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:01.811 05:55:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.811 05:55:23 -- common/autotest_common.sh@1330 -- # shift 00:15:01.811 05:55:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:01.811 05:55:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.811 05:55:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.811 05:55:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:01.811 05:55:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:01.811 05:55:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:01.811 05:55:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:01.811 05:55:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.811 05:55:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.811 05:55:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:01.811 05:55:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:01.811 05:55:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:01.811 05:55:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:01.812 05:55:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:01.812 05:55:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:01.812 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:01.812 fio-3.35 00:15:01.812 Starting 1 thread 00:15:04.343 00:15:04.343 test: (groupid=0, jobs=1): err= 0: pid=81157: Sun Dec 15 05:55:25 2024 00:15:04.343 read: IOPS=8266, BW=129MiB/s (135MB/s)(259MiB/2008msec) 00:15:04.343 slat (usec): min=2, max=148, avg= 4.18, stdev= 3.17 00:15:04.343 clat (usec): min=2585, max=22944, avg=8419.49, stdev=2989.18 00:15:04.343 lat (usec): min=2589, max=22955, avg=8423.67, stdev=2989.91 00:15:04.343 clat percentiles (usec): 00:15:04.343 | 1.00th=[ 3916], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5800], 00:15:04.343 | 30.00th=[ 6390], 40.00th=[ 7111], 50.00th=[ 7832], 60.00th=[ 8586], 00:15:04.343 | 70.00th=[ 9634], 80.00th=[10945], 90.00th=[12780], 95.00th=[13829], 00:15:04.343 | 99.00th=[16909], 99.50th=[18744], 99.90th=[21627], 99.95th=[21890], 00:15:04.343 | 99.99th=[22938] 00:15:04.343 bw ( KiB/s): min=61477, max=80160, per=51.59%, avg=68241.25, stdev=8187.98, samples=4 00:15:04.343 iops : 
min= 3842, max= 5010, avg=4265.00, stdev=511.83, samples=4 00:15:04.343 write: IOPS=4704, BW=73.5MiB/s (77.1MB/s)(139MiB/1888msec); 0 zone resets 00:15:04.343 slat (usec): min=32, max=472, avg=39.79, stdev= 9.68 00:15:04.343 clat (usec): min=5586, max=30203, avg=12447.85, stdev=3342.09 00:15:04.343 lat (usec): min=5633, max=30259, avg=12487.65, stdev=3344.73 00:15:04.343 clat percentiles (usec): 00:15:04.343 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10290], 00:15:04.343 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:15:04.343 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15270], 95.00th=[16712], 00:15:04.343 | 99.00th=[27657], 99.50th=[28705], 99.90th=[29492], 99.95th=[29492], 00:15:04.343 | 99.99th=[30278] 00:15:04.343 bw ( KiB/s): min=63265, max=81664, per=93.84%, avg=70640.25, stdev=7823.25, samples=4 00:15:04.343 iops : min= 3954, max= 5104, avg=4415.00, stdev=488.97, samples=4 00:15:04.343 lat (msec) : 4=0.90%, 10=52.05%, 20=45.65%, 50=1.40% 00:15:04.343 cpu : usr=80.23%, sys=13.99%, ctx=6, majf=0, minf=1 00:15:04.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:04.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.343 issued rwts: total=16599,8883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.343 00:15:04.343 Run status group 0 (all jobs): 00:15:04.343 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2008-2008msec 00:15:04.343 WRITE: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=139MiB (146MB), run=1888-1888msec 00:15:04.343 05:55:25 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.343 05:55:25 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:04.343 05:55:25 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:04.343 05:55:25 -- host/fio.sh@51 -- # get_nvme_bdfs 00:15:04.343 05:55:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:15:04.343 05:55:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:15:04.343 05:55:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:04.343 05:55:25 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:04.343 05:55:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:15:04.343 05:55:25 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:15:04.343 05:55:25 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:04.343 05:55:25 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:04.602 Nvme0n1 00:15:04.602 05:55:26 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:05.206 05:55:26 -- host/fio.sh@53 -- # ls_guid=53c0f377-b923-4801-aeeb-8ddc81fba5ef 00:15:05.206 05:55:26 -- host/fio.sh@54 -- # get_lvs_free_mb 53c0f377-b923-4801-aeeb-8ddc81fba5ef 00:15:05.206 05:55:26 -- common/autotest_common.sh@1353 -- # local lvs_uuid=53c0f377-b923-4801-aeeb-8ddc81fba5ef 00:15:05.206 05:55:26 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:05.206 05:55:26 -- common/autotest_common.sh@1355 -- # local fc 00:15:05.206 05:55:26 -- 
common/autotest_common.sh@1356 -- # local cs 00:15:05.206 05:55:26 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:05.206 05:55:26 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:05.206 { 00:15:05.206 "uuid": "53c0f377-b923-4801-aeeb-8ddc81fba5ef", 00:15:05.206 "name": "lvs_0", 00:15:05.206 "base_bdev": "Nvme0n1", 00:15:05.206 "total_data_clusters": 4, 00:15:05.206 "free_clusters": 4, 00:15:05.206 "block_size": 4096, 00:15:05.206 "cluster_size": 1073741824 00:15:05.206 } 00:15:05.206 ]' 00:15:05.206 05:55:26 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="53c0f377-b923-4801-aeeb-8ddc81fba5ef") .free_clusters' 00:15:05.465 05:55:26 -- common/autotest_common.sh@1358 -- # fc=4 00:15:05.465 05:55:26 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="53c0f377-b923-4801-aeeb-8ddc81fba5ef") .cluster_size' 00:15:05.465 4096 00:15:05.465 05:55:26 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:15:05.465 05:55:26 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:15:05.465 05:55:26 -- common/autotest_common.sh@1363 -- # echo 4096 00:15:05.465 05:55:26 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:05.724 63267028-6a44-4adc-ba0c-adb96fe4ca4d 00:15:05.724 05:55:27 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:05.982 05:55:27 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:06.241 05:55:27 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:06.500 05:55:27 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:06.500 05:55:27 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:06.500 05:55:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:06.500 05:55:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.500 05:55:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:06.500 05:55:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:06.500 05:55:27 -- common/autotest_common.sh@1330 -- # shift 00:15:06.500 05:55:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:06.500 05:55:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.500 05:55:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:06.500 05:55:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:06.500 05:55:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:06.500 05:55:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:06.500 05:55:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:06.500 05:55:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.500 05:55:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:06.500 05:55:27 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:06.500 05:55:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:06.500 05:55:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:06.500 05:55:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:06.500 05:55:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:06.500 05:55:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:06.500 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:06.500 fio-3.35 00:15:06.500 Starting 1 thread 00:15:09.033 00:15:09.033 test: (groupid=0, jobs=1): err= 0: pid=81271: Sun Dec 15 05:55:30 2024 00:15:09.033 read: IOPS=6441, BW=25.2MiB/s (26.4MB/s)(50.5MiB/2009msec) 00:15:09.033 slat (usec): min=2, max=314, avg= 2.91, stdev= 3.90 00:15:09.033 clat (usec): min=3062, max=17964, avg=10361.84, stdev=855.07 00:15:09.033 lat (usec): min=3072, max=17966, avg=10364.75, stdev=854.78 00:15:09.033 clat percentiles (usec): 00:15:09.033 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:15:09.033 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:15:09.033 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:15:09.033 | 99.00th=[12256], 99.50th=[12518], 99.90th=[16057], 99.95th=[16909], 00:15:09.033 | 99.99th=[17171] 00:15:09.033 bw ( KiB/s): min=24487, max=26408, per=99.93%, avg=25745.75, stdev=880.90, samples=4 00:15:09.033 iops : min= 6121, max= 6602, avg=6436.25, stdev=220.58, samples=4 00:15:09.033 write: IOPS=6448, BW=25.2MiB/s (26.4MB/s)(50.6MiB/2009msec); 0 zone resets 00:15:09.033 slat (usec): min=2, max=259, avg= 3.01, stdev= 2.84 00:15:09.033 clat (usec): min=2478, max=17335, avg=9410.80, stdev=825.17 00:15:09.033 lat (usec): min=2491, max=17338, avg=9413.82, stdev=825.03 00:15:09.033 clat percentiles (usec): 00:15:09.033 | 1.00th=[ 7635], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:15:09.033 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:15:09.033 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10552], 00:15:09.033 | 99.00th=[11207], 99.50th=[11469], 99.90th=[16188], 99.95th=[16909], 00:15:09.033 | 99.99th=[17433] 00:15:09.033 bw ( KiB/s): min=25600, max=25984, per=99.89%, avg=25765.00, stdev=186.63, samples=4 00:15:09.033 iops : min= 6400, max= 6496, avg=6441.25, stdev=46.66, samples=4 00:15:09.033 lat (msec) : 4=0.06%, 10=55.82%, 20=44.12% 00:15:09.033 cpu : usr=71.26%, sys=22.81%, ctx=8, majf=0, minf=5 00:15:09.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:09.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.033 issued rwts: total=12940,12955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.033 00:15:09.033 Run status group 0 (all jobs): 00:15:09.033 READ: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=50.5MiB (53.0MB), run=2009-2009msec 00:15:09.033 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=50.6MiB (53.1MB), run=2009-2009msec 00:15:09.033 05:55:30 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:09.292 05:55:30 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:09.292 05:55:30 -- host/fio.sh@64 -- # ls_nested_guid=56001c47-9867-4168-a14f-b5308e6e32ea 00:15:09.292 05:55:30 -- host/fio.sh@65 -- # get_lvs_free_mb 56001c47-9867-4168-a14f-b5308e6e32ea 00:15:09.292 05:55:30 -- common/autotest_common.sh@1353 -- # local lvs_uuid=56001c47-9867-4168-a14f-b5308e6e32ea 00:15:09.292 05:55:30 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:09.292 05:55:30 -- common/autotest_common.sh@1355 -- # local fc 00:15:09.292 05:55:30 -- common/autotest_common.sh@1356 -- # local cs 00:15:09.292 05:55:30 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:09.551 05:55:31 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:09.551 { 00:15:09.551 "uuid": "53c0f377-b923-4801-aeeb-8ddc81fba5ef", 00:15:09.551 "name": "lvs_0", 00:15:09.551 "base_bdev": "Nvme0n1", 00:15:09.551 "total_data_clusters": 4, 00:15:09.551 "free_clusters": 0, 00:15:09.551 "block_size": 4096, 00:15:09.551 "cluster_size": 1073741824 00:15:09.551 }, 00:15:09.551 { 00:15:09.551 "uuid": "56001c47-9867-4168-a14f-b5308e6e32ea", 00:15:09.551 "name": "lvs_n_0", 00:15:09.551 "base_bdev": "63267028-6a44-4adc-ba0c-adb96fe4ca4d", 00:15:09.551 "total_data_clusters": 1022, 00:15:09.551 "free_clusters": 1022, 00:15:09.551 "block_size": 4096, 00:15:09.551 "cluster_size": 4194304 00:15:09.551 } 00:15:09.551 ]' 00:15:09.551 05:55:31 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="56001c47-9867-4168-a14f-b5308e6e32ea") .free_clusters' 00:15:09.810 05:55:31 -- common/autotest_common.sh@1358 -- # fc=1022 00:15:09.810 05:55:31 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="56001c47-9867-4168-a14f-b5308e6e32ea") .cluster_size' 00:15:09.810 4088 00:15:09.810 05:55:31 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:09.810 05:55:31 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:15:09.810 05:55:31 -- common/autotest_common.sh@1363 -- # echo 4088 00:15:09.810 05:55:31 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:10.068 ffe04e29-ea17-41a7-9d59-e37b52cbe49a 00:15:10.069 05:55:31 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:10.327 05:55:31 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:10.586 05:55:32 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:10.846 05:55:32 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:10.846 05:55:32 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:10.846 05:55:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:10.846 05:55:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:10.846 05:55:32 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:15:10.846 05:55:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:10.846 05:55:32 -- common/autotest_common.sh@1330 -- # shift 00:15:10.846 05:55:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:10.846 05:55:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:10.846 05:55:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:10.846 05:55:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:10.846 05:55:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:10.846 05:55:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:10.846 05:55:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:10.846 05:55:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:10.846 05:55:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:10.846 05:55:32 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:10.846 05:55:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:10.846 05:55:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:10.846 05:55:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:10.846 05:55:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:10.846 05:55:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:10.846 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:10.846 fio-3.35 00:15:10.846 Starting 1 thread 00:15:13.379 00:15:13.379 test: (groupid=0, jobs=1): err= 0: pid=81350: Sun Dec 15 05:55:34 2024 00:15:13.379 read: IOPS=5762, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec) 00:15:13.379 slat (usec): min=2, max=303, avg= 2.82, stdev= 4.13 00:15:13.379 clat (usec): min=3268, max=20617, avg=11631.09, stdev=1008.26 00:15:13.379 lat (usec): min=3277, max=20620, avg=11633.91, stdev=1007.99 00:15:13.379 clat percentiles (usec): 00:15:13.379 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:15:13.379 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:15:13.379 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:15:13.379 | 99.00th=[13829], 99.50th=[14353], 99.90th=[19530], 99.95th=[20055], 00:15:13.379 | 99.99th=[20579] 00:15:13.379 bw ( KiB/s): min=22248, max=23400, per=99.81%, avg=23004.00, stdev=514.56, samples=4 00:15:13.379 iops : min= 5562, max= 5850, avg=5751.00, stdev=128.64, samples=4 00:15:13.379 write: IOPS=5749, BW=22.5MiB/s (23.5MB/s)(45.1MiB/2009msec); 0 zone resets 00:15:13.379 slat (usec): min=2, max=245, avg= 2.96, stdev= 3.10 00:15:13.379 clat (usec): min=2304, max=18758, avg=10525.16, stdev=923.80 00:15:13.379 lat (usec): min=2317, max=18761, avg=10528.12, stdev=923.69 00:15:13.379 clat percentiles (usec): 00:15:13.379 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:15:13.379 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:15:13.379 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:15:13.379 | 99.00th=[12518], 99.50th=[12911], 99.90th=[17433], 99.95th=[18482], 00:15:13.379 | 99.99th=[18744] 00:15:13.379 bw ( KiB/s): min=22848, max=23112, per=99.99%, avg=22994.00, 
stdev=123.83, samples=4 00:15:13.379 iops : min= 5712, max= 5778, avg=5748.50, stdev=30.96, samples=4 00:15:13.379 lat (msec) : 4=0.06%, 10=14.75%, 20=85.17%, 50=0.02% 00:15:13.379 cpu : usr=73.66%, sys=20.42%, ctx=19, majf=0, minf=5 00:15:13.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:13.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:13.380 issued rwts: total=11576,11550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:13.380 00:15:13.380 Run status group 0 (all jobs): 00:15:13.380 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:15:13.380 WRITE: bw=22.5MiB/s (23.5MB/s), 22.5MiB/s-22.5MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2009-2009msec 00:15:13.380 05:55:34 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:13.640 05:55:35 -- host/fio.sh@74 -- # sync 00:15:13.640 05:55:35 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:13.898 05:55:35 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:14.157 05:55:35 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:15:14.416 05:55:35 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:14.675 05:55:36 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:15.612 05:55:37 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:15.612 05:55:37 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:15.612 05:55:37 -- host/fio.sh@86 -- # nvmftestfini 00:15:15.612 05:55:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:15.612 05:55:37 -- nvmf/common.sh@116 -- # sync 00:15:15.612 05:55:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:15.612 05:55:37 -- nvmf/common.sh@119 -- # set +e 00:15:15.612 05:55:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:15.612 05:55:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:15.612 rmmod nvme_tcp 00:15:15.612 rmmod nvme_fabrics 00:15:15.612 rmmod nvme_keyring 00:15:15.612 05:55:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:15.612 05:55:37 -- nvmf/common.sh@123 -- # set -e 00:15:15.612 05:55:37 -- nvmf/common.sh@124 -- # return 0 00:15:15.612 05:55:37 -- nvmf/common.sh@477 -- # '[' -n 81038 ']' 00:15:15.612 05:55:37 -- nvmf/common.sh@478 -- # killprocess 81038 00:15:15.612 05:55:37 -- common/autotest_common.sh@936 -- # '[' -z 81038 ']' 00:15:15.612 05:55:37 -- common/autotest_common.sh@940 -- # kill -0 81038 00:15:15.612 05:55:37 -- common/autotest_common.sh@941 -- # uname 00:15:15.612 05:55:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.612 05:55:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81038 00:15:15.612 05:55:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:15.612 killing process with pid 81038 00:15:15.613 05:55:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:15.613 05:55:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81038' 00:15:15.613 05:55:37 -- common/autotest_common.sh@955 -- # kill 81038 00:15:15.613 05:55:37 
-- common/autotest_common.sh@960 -- # wait 81038 00:15:15.872 05:55:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:15.872 05:55:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:15.872 05:55:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:15.872 05:55:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.872 05:55:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:15.872 05:55:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.872 05:55:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.872 05:55:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.872 05:55:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:15.872 00:15:15.872 real 0m19.279s 00:15:15.872 user 1m25.702s 00:15:15.872 sys 0m4.261s 00:15:15.872 ************************************ 00:15:15.872 05:55:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:15.872 05:55:37 -- common/autotest_common.sh@10 -- # set +x 00:15:15.872 END TEST nvmf_fio_host 00:15:15.872 ************************************ 00:15:15.872 05:55:37 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:15.872 05:55:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.872 05:55:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.872 05:55:37 -- common/autotest_common.sh@10 -- # set +x 00:15:15.872 ************************************ 00:15:15.872 START TEST nvmf_failover 00:15:15.872 ************************************ 00:15:15.872 05:55:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:15.872 * Looking for test storage... 00:15:15.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:15.872 05:55:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:15.872 05:55:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:15.872 05:55:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:16.131 05:55:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:16.131 05:55:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:16.131 05:55:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:16.131 05:55:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:16.131 05:55:37 -- scripts/common.sh@335 -- # IFS=.-: 00:15:16.131 05:55:37 -- scripts/common.sh@335 -- # read -ra ver1 00:15:16.131 05:55:37 -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.131 05:55:37 -- scripts/common.sh@336 -- # read -ra ver2 00:15:16.131 05:55:37 -- scripts/common.sh@337 -- # local 'op=<' 00:15:16.131 05:55:37 -- scripts/common.sh@339 -- # ver1_l=2 00:15:16.131 05:55:37 -- scripts/common.sh@340 -- # ver2_l=1 00:15:16.131 05:55:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:16.131 05:55:37 -- scripts/common.sh@343 -- # case "$op" in 00:15:16.131 05:55:37 -- scripts/common.sh@344 -- # : 1 00:15:16.131 05:55:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:16.131 05:55:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.131 05:55:37 -- scripts/common.sh@364 -- # decimal 1 00:15:16.131 05:55:37 -- scripts/common.sh@352 -- # local d=1 00:15:16.131 05:55:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.131 05:55:37 -- scripts/common.sh@354 -- # echo 1 00:15:16.131 05:55:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:16.131 05:55:37 -- scripts/common.sh@365 -- # decimal 2 00:15:16.131 05:55:37 -- scripts/common.sh@352 -- # local d=2 00:15:16.131 05:55:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.131 05:55:37 -- scripts/common.sh@354 -- # echo 2 00:15:16.131 05:55:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:16.131 05:55:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:16.131 05:55:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:16.131 05:55:37 -- scripts/common.sh@367 -- # return 0 00:15:16.131 05:55:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.131 05:55:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:16.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.131 --rc genhtml_branch_coverage=1 00:15:16.131 --rc genhtml_function_coverage=1 00:15:16.131 --rc genhtml_legend=1 00:15:16.131 --rc geninfo_all_blocks=1 00:15:16.131 --rc geninfo_unexecuted_blocks=1 00:15:16.131 00:15:16.131 ' 00:15:16.131 05:55:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:16.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.131 --rc genhtml_branch_coverage=1 00:15:16.131 --rc genhtml_function_coverage=1 00:15:16.131 --rc genhtml_legend=1 00:15:16.131 --rc geninfo_all_blocks=1 00:15:16.131 --rc geninfo_unexecuted_blocks=1 00:15:16.131 00:15:16.131 ' 00:15:16.131 05:55:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:16.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.131 --rc genhtml_branch_coverage=1 00:15:16.131 --rc genhtml_function_coverage=1 00:15:16.131 --rc genhtml_legend=1 00:15:16.131 --rc geninfo_all_blocks=1 00:15:16.131 --rc geninfo_unexecuted_blocks=1 00:15:16.131 00:15:16.131 ' 00:15:16.131 05:55:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:16.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.131 --rc genhtml_branch_coverage=1 00:15:16.131 --rc genhtml_function_coverage=1 00:15:16.131 --rc genhtml_legend=1 00:15:16.131 --rc geninfo_all_blocks=1 00:15:16.131 --rc geninfo_unexecuted_blocks=1 00:15:16.131 00:15:16.131 ' 00:15:16.131 05:55:37 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.131 05:55:37 -- nvmf/common.sh@7 -- # uname -s 00:15:16.131 05:55:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.131 05:55:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.131 05:55:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.131 05:55:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.131 05:55:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.132 05:55:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.132 05:55:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.132 05:55:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.132 05:55:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.132 05:55:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.132 05:55:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:15:16.132 
05:55:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:15:16.132 05:55:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.132 05:55:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.132 05:55:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.132 05:55:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.132 05:55:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.132 05:55:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.132 05:55:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.132 05:55:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.132 05:55:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.132 05:55:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.132 05:55:37 -- paths/export.sh@5 -- # export PATH 00:15:16.132 05:55:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.132 05:55:37 -- nvmf/common.sh@46 -- # : 0 00:15:16.132 05:55:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:16.132 05:55:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:16.132 05:55:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:16.132 05:55:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.132 05:55:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.132 05:55:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:16.132 05:55:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:16.132 05:55:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.132 05:55:37 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.132 05:55:37 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.132 05:55:37 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.132 05:55:37 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.132 05:55:37 -- host/failover.sh@18 -- # nvmftestinit 00:15:16.132 05:55:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:16.132 05:55:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.132 05:55:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:16.132 05:55:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:16.132 05:55:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:16.132 05:55:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.132 05:55:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.132 05:55:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.132 05:55:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:16.132 05:55:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:16.132 05:55:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:16.132 05:55:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:16.132 05:55:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:16.132 05:55:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:16.132 05:55:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.132 05:55:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.132 05:55:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.132 05:55:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:16.132 05:55:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.132 05:55:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.132 05:55:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.132 05:55:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.132 05:55:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.132 05:55:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.132 05:55:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.132 05:55:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.132 05:55:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:16.132 05:55:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:16.132 Cannot find device "nvmf_tgt_br" 00:15:16.132 05:55:37 -- nvmf/common.sh@154 -- # true 00:15:16.132 05:55:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.132 Cannot find device "nvmf_tgt_br2" 00:15:16.132 05:55:37 -- nvmf/common.sh@155 -- # true 00:15:16.132 05:55:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:16.132 05:55:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:16.132 Cannot find device "nvmf_tgt_br" 00:15:16.132 05:55:37 -- nvmf/common.sh@157 -- # true 00:15:16.132 05:55:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:16.132 Cannot find device "nvmf_tgt_br2" 00:15:16.132 05:55:37 -- nvmf/common.sh@158 -- # true 00:15:16.132 05:55:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:16.132 05:55:37 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:15:16.132 05:55:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.132 05:55:37 -- nvmf/common.sh@161 -- # true 00:15:16.132 05:55:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.132 05:55:37 -- nvmf/common.sh@162 -- # true 00:15:16.132 05:55:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.132 05:55:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.132 05:55:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.132 05:55:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.132 05:55:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.132 05:55:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.132 05:55:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.132 05:55:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.132 05:55:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.132 05:55:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:16.132 05:55:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:16.132 05:55:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:16.401 05:55:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:16.401 05:55:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.401 05:55:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.401 05:55:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.401 05:55:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:16.401 05:55:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:16.401 05:55:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.401 05:55:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.401 05:55:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.401 05:55:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.401 05:55:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.401 05:55:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:16.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:16.401 00:15:16.401 --- 10.0.0.2 ping statistics --- 00:15:16.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.401 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:16.401 05:55:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:16.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:16.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:16.401 00:15:16.401 --- 10.0.0.3 ping statistics --- 00:15:16.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.401 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:16.401 05:55:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:16.401 00:15:16.401 --- 10.0.0.1 ping statistics --- 00:15:16.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.401 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:16.401 05:55:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.401 05:55:37 -- nvmf/common.sh@421 -- # return 0 00:15:16.401 05:55:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:16.401 05:55:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.401 05:55:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:16.401 05:55:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:16.401 05:55:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.401 05:55:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:16.401 05:55:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:16.401 05:55:37 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:16.401 05:55:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:16.401 05:55:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.401 05:55:37 -- common/autotest_common.sh@10 -- # set +x 00:15:16.401 05:55:37 -- nvmf/common.sh@469 -- # nvmfpid=81606 00:15:16.401 05:55:37 -- nvmf/common.sh@470 -- # waitforlisten 81606 00:15:16.401 05:55:37 -- common/autotest_common.sh@829 -- # '[' -z 81606 ']' 00:15:16.401 05:55:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.401 05:55:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:16.401 05:55:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.401 05:55:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.401 05:55:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.401 05:55:37 -- common/autotest_common.sh@10 -- # set +x 00:15:16.401 [2024-12-15 05:55:37.928036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:16.401 [2024-12-15 05:55:37.928113] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.673 [2024-12-15 05:55:38.067795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.674 [2024-12-15 05:55:38.109005] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:16.674 [2024-12-15 05:55:38.109214] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.674 [2024-12-15 05:55:38.109230] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:16.674 [2024-12-15 05:55:38.109241] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.674 [2024-12-15 05:55:38.109769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.674 [2024-12-15 05:55:38.109901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.674 [2024-12-15 05:55:38.109917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.276 05:55:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.276 05:55:38 -- common/autotest_common.sh@862 -- # return 0 00:15:17.276 05:55:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:17.276 05:55:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.276 05:55:38 -- common/autotest_common.sh@10 -- # set +x 00:15:17.535 05:55:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.535 05:55:38 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:17.794 [2024-12-15 05:55:39.192624] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.794 05:55:39 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:18.053 Malloc0 00:15:18.053 05:55:39 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.312 05:55:39 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:18.571 05:55:40 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.830 [2024-12-15 05:55:40.284257] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.830 05:55:40 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:19.088 [2024-12-15 05:55:40.528515] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:19.088 05:55:40 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:19.348 [2024-12-15 05:55:40.752678] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:19.348 05:55:40 -- host/failover.sh@31 -- # bdevperf_pid=81669 00:15:19.348 05:55:40 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:19.348 05:55:40 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.348 05:55:40 -- host/failover.sh@34 -- # waitforlisten 81669 /var/tmp/bdevperf.sock 00:15:19.348 05:55:40 -- common/autotest_common.sh@829 -- # '[' -z 81669 ']' 00:15:19.348 05:55:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.348 05:55:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.348 05:55:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:19.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:19.348 05:55:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.348 05:55:40 -- common/autotest_common.sh@10 -- # set +x 00:15:20.283 05:55:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.283 05:55:41 -- common/autotest_common.sh@862 -- # return 0 00:15:20.283 05:55:41 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.542 NVMe0n1 00:15:20.542 05:55:42 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.801 00:15:20.801 05:55:42 -- host/failover.sh@39 -- # run_test_pid=81693 00:15:20.801 05:55:42 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:20.801 05:55:42 -- host/failover.sh@41 -- # sleep 1 00:15:21.737 05:55:43 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.996 [2024-12-15 05:55:43.629089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629231] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629316] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:21.996 [2024-12-15 05:55:43.629338] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa2b0 is same with the state(5) to be set 00:15:22.255 05:55:43 -- host/failover.sh@45 -- # sleep 3 00:15:25.544 05:55:46 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:25.544 00:15:25.544 05:55:46 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:25.803 [2024-12-15 05:55:47.203922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.203999] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204063] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204078] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 [2024-12-15 05:55:47.204086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c466b0 is same with the state(5) to be set 00:15:25.803 05:55:47 -- host/failover.sh@50 -- # sleep 3 00:15:29.120 05:55:50 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.120 [2024-12-15 05:55:50.500091] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.120 05:55:50 -- host/failover.sh@55 -- # sleep 1 00:15:30.057 05:55:51 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:30.316 [2024-12-15 05:55:51.778313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778418] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778425] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778455] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 
05:55:51.778462] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778499] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 [2024-12-15 05:55:51.778595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dedb20 is same with the state(5) to be set 00:15:30.316 05:55:51 -- host/failover.sh@59 -- # wait 81693 00:15:36.885 0 00:15:36.885 05:55:57 -- host/failover.sh@61 -- # killprocess 81669 00:15:36.885 05:55:57 -- common/autotest_common.sh@936 -- # '[' -z 81669 ']' 00:15:36.885 05:55:57 -- common/autotest_common.sh@940 -- # kill -0 81669 00:15:36.885 05:55:57 -- common/autotest_common.sh@941 -- # uname 00:15:36.885 05:55:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:36.885 05:55:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81669 00:15:36.885 killing process with pid 81669 00:15:36.885 05:55:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:36.885 05:55:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:36.885 05:55:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81669' 00:15:36.885 05:55:57 -- common/autotest_common.sh@955 -- # kill 81669 00:15:36.885 05:55:57 -- common/autotest_common.sh@960 -- # wait 81669 00:15:36.885 05:55:57 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:36.885 [2024-12-15 05:55:40.824640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:36.885 [2024-12-15 05:55:40.824798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81669 ] 00:15:36.885 [2024-12-15 05:55:40.984825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.885 [2024-12-15 05:55:41.034506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.885 Running I/O for 15 seconds... 00:15:36.885 [2024-12-15 05:55:43.629392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 
[2024-12-15 05:55:43.629676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.885 [2024-12-15 05:55:43.629824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.885 [2024-12-15 05:55:43.629836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.629851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.629864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.629896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.629930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.629945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.629960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.629975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.629988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630018] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630663] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.630896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.630957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.630987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.631023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.631039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.631052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.631083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.631114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.631130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.886 [2024-12-15 05:55:43.631144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.631170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.631186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.631202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.886 [2024-12-15 05:55:43.631216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.886 [2024-12-15 05:55:43.631232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.631246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.631276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.631335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.631425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 
[2024-12-15 05:55:43.631703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.631788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.631948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.631976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.631991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.632004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632019] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.632034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.632063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.632091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.887 [2024-12-15 05:55:43.632479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.887 [2024-12-15 05:55:43.632508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.887 [2024-12-15 05:55:43.632524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632652] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.888 [2024-12-15 05:55:43.632667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.888 [2024-12-15 05:55:43.632757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.888 [2024-12-15 05:55:43.632787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.888 [2024-12-15 05:55:43.632819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.888 [2024-12-15 05:55:43.632849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.632980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.632996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.888 [2024-12-15 05:55:43.633196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.888 [2024-12-15 05:55:43.633504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ada40 is same with the state(5) to be set 00:15:36.888 [2024-12-15 05:55:43.633538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:36.888 [2024-12-15 05:55:43.633549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:36.888 [2024-12-15 05:55:43.633561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124632 len:8 PRP1 0x0 PRP2 0x0 00:15:36.888 [2024-12-15 05:55:43.633574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.888 [2024-12-15 05:55:43.633622] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5ada40 was disconnected and freed. 
reset controller.
00:15:36.888 [2024-12-15 05:55:43.633640] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:15:36.888 [2024-12-15 05:55:43.633696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:36.888 [2024-12-15 05:55:43.633719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.888 [2024-12-15 05:55:43.633734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:36.888 [2024-12-15 05:55:43.633748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.888 [2024-12-15 05:55:43.633762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:36.888 [2024-12-15 05:55:43.633776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.888 [2024-12-15 05:55:43.633790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:36.888 [2024-12-15 05:55:43.633804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.888 [2024-12-15 05:55:43.633848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:36.888 [2024-12-15 05:55:43.633888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579d40 (9): Bad file descriptor
00:15:36.888 [2024-12-15 05:55:43.636636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:36.888 [2024-12-15 05:55:43.671277] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
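
Every completion in the trace above is reported as "ABORTED - SQ DELETION (00/08)": in NVMe terms that is Status Code Type 0x0 (generic command status) with Status Code 0x08, "Command Aborted due to SQ Deletion", which is the expected outcome when bdev_nvme tears down the submission queues on 10.0.0.2:4420 and fails over to 10.0.0.2:4421. The minimal Python sketch below shows one way to tally the aborted READ/WRITE entries and decode that status field from a saved copy of this console output; the file name build.log and the helper names are illustrative only, not part of the SPDK test suite.

    import re
    from collections import Counter

    # Matches the I/O entries printed by nvme_io_qpair_print_command, e.g.
    #   "*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124544 len:8 ..."
    CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

    def tally_aborted_io(log_text):
        """Count how many READ and WRITE commands appear in the abort trace."""
        return Counter(m.group(1) for m in CMD_RE.finditer(log_text))

    def decode_status(field):
        """Split a "(SCT/SC)" field such as "(00/08)" into its two hex values."""
        sct, sc = field.strip("()").split("/")
        return int(sct, 16), int(sc, 16)   # (0, 8): generic status, aborted due to SQ deletion

    if __name__ == "__main__":
        with open("build.log") as f:       # hypothetical local copy of this console log
            text = f.read()
        print(tally_aborted_io(text))      # e.g. Counter({'READ': ..., 'WRITE': ...})
        print(decode_status("(00/08)"))    # (0, 8)

The same status decoding applies to the later abort burst recorded at 05:55:47 below.
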
00:15:36.888 [2024-12-15 05:55:47.204147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 
05:55:47.204540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.889 [2024-12-15 05:55:47.204706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.889 [2024-12-15 05:55:47.204763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.889 [2024-12-15 05:55:47.204845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.889 [2024-12-15 05:55:47.204872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.204960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.204975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.205005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.205035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.205064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.205101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.205130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.205159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.889 [2024-12-15 05:55:47.205188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.889 [2024-12-15 05:55:47.205216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.889 [2024-12-15 05:55:47.205245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.889 [2024-12-15 05:55:47.205293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.889 [2024-12-15 05:55:47.205324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.890 [2024-12-15 05:55:47.205337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.890 [2024-12-15 05:55:47.205352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.890 [2024-12-15 05:55:47.205366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.890 [2024-12-15 05:55:47.205381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.890 [2024-12-15 05:55:47.205394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.890 [2024-12-15 05:55:47.205410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.890 [2024-12-15 05:55:47.205423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.890 [2024-12-15 05:55:47.205438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.890 [2024-12-15 05:55:47.205452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.890 [2024-12-15 05:55:47.205467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.890 [2024-12-15 05:55:47.205480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.890 [2024-12-15 05:55:47.205507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.890 [2024-12-15 05:55:47.205522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.890 [2024-12-15 05:55:47.205537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs omitted here: queued READ and WRITE commands on qid:1 (nsid:1, len:8, lbas in the 120816-121888 range) are each printed and completed as ABORTED - SQ DELETION (00/08) while the qpair is torn down ...]
00:15:36.892 [2024-12-15 05:55:47.208361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x598c00 is same with the state(5) to be set
00:15:36.892 [2024-12-15 05:55:47.208378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:36.892 [2024-12-15 05:55:47.208390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:36.892 [2024-12-15 05:55:47.208403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121288 len:8 PRP1 0x0 PRP2 0x0
00:15:36.892 [2024-12-15 05:55:47.208417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.892 [2024-12-15 05:55:47.208462] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x598c00 was disconnected and freed. reset controller.
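Editor's note: every completion above carries the status pair "(00/08)", which SPDK prints as (SCT/SC) in hex; SCT 0x0 is the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", i.e. the queued I/O was discarded because its submission queue went away with the qpair. Below is a minimal, illustrative Python decoder for that pair; the name tables are abbreviated, not exhaustive, and are not part of SPDK.

# Illustrative only: decode the "(SCT/SC)" pair that spdk_nvme_print_completion
# prints, e.g. "ABORTED - SQ DELETION (00/08)". Names follow the NVMe base
# spec; only a few common codes are listed.
SCT_NAMES = {
    0x0: "GENERIC",
    0x1: "COMMAND_SPECIFIC",
    0x2: "MEDIA_AND_DATA_INTEGRITY",
    0x3: "PATH_RELATED",
}

GENERIC_SC_NAMES = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(pair):
    """Turn a '(00/08)'-style SCT/SC pair into a readable label."""
    sct_hex, sc_hex = pair.strip("()").split("/")
    sct, sc = int(sct_hex, 16), int(sc_hex, 16)
    sct_name = SCT_NAMES.get(sct, "SCT 0x%x" % sct)
    if sct == 0:
        sc_name = GENERIC_SC_NAMES.get(sc, "SC 0x%x" % sc)
    else:
        sc_name = "SC 0x%x" % sc
    return "%s: %s" % (sct_name, sc_name)

if __name__ == "__main__":
    print(decode_status("(00/08)"))  # -> GENERIC: ABORTED - SQ DELETION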
00:15:36.892 [2024-12-15 05:55:47.208480] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:15:36.892 [2024-12-15 05:55:47.208534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:36.892 [2024-12-15 05:55:47.208555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.892 [2024-12-15 05:55:47.208571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:36.892 [2024-12-15 05:55:47.208584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.892 [2024-12-15 05:55:47.208599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:36.892 [2024-12-15 05:55:47.208612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.892 [2024-12-15 05:55:47.208626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:36.892 [2024-12-15 05:55:47.208640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:36.892 [2024-12-15 05:55:47.208657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:36.892 [2024-12-15 05:55:47.208706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579d40 (9): Bad file descriptor
00:15:36.892 [2024-12-15 05:55:47.211192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:36.892 [2024-12-15 05:55:47.241505] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
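Editor's note: the failover above assumes the host already knows a second TCP path (10.0.0.2:4422) to nqn.2016-06.io.spdk:cnode1. A rough sketch of how such an alternate path could be registered through SPDK's rpc.py follows; the bdev controller name, the script path and the -x/--multipath flag are assumptions for illustration, not taken from this job's test scripts.

# Sketch: register a primary and an alternate TCP path so bdev_nvme can fail
# over, as the log above does from 10.0.0.2:4421 to 10.0.0.2:4422.
import subprocess

RPC = "scripts/rpc.py"                  # path inside an SPDK checkout (assumed)
NQN = "nqn.2016-06.io.spdk:cnode1"      # subsystem NQN named in the log above

def attach_path(trsvcid):
    # Register one TCP path under the same controller name ("Nvme0", assumed)
    # so a second call is treated as a failover target rather than a new disk.
    # The "-x failover" multipath policy flag is an assumption about the RPC.
    subprocess.run(
        [RPC, "bdev_nvme_attach_controller",
         "-b", "Nvme0", "-t", "tcp", "-f", "ipv4",
         "-a", "10.0.0.2", "-s", trsvcid,
         "-n", NQN, "-x", "failover"],
        check=True,
    )

attach_path("4421")   # primary path, as seen in the log
attach_path("4422")   # alternate path the failover above switches to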
00:15:36.892 [2024-12-15 05:55:51.778658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs omitted here: queued READ and WRITE commands on qid:1 (nsid:1, len:8, lbas in the 92192-93480 range) are again printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:15:36.895 [2024-12-15 05:55:51.782378] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.895 [2024-12-15 05:55:51.782450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.895 [2024-12-15 05:55:51.782604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:36.895 [2024-12-15 05:55:51.782633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92864 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.895 [2024-12-15 05:55:51.782836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57c970 is same with the state(5) to be set 00:15:36.895 [2024-12-15 05:55:51.782868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:36.895 [2024-12-15 05:55:51.782879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:36.895 [2024-12-15 05:55:51.782907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:15:36.895 [2024-12-15 05:55:51.782930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.782978] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x57c970 was disconnected and freed. reset controller. 
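The wall of ABORTED - SQ DELETION (00/08) notices above is every queued I/O the initiator still had in flight when the submission queue was torn down for failover; once the qpair is disconnected and freed, bdev_nvme resets the controller on the next path. When reading a dump like this offline, a one-liner can summarize it; a minimal sketch (not part of the test suite), assuming the trace was saved to the try.txt file this test cats later:

  # Tally aborted READ vs WRITE commands in the saved trace (illustrative only;
  # the field layout matches the nvme_io_qpair_print_command lines above).
  grep -Eo '\*NOTICE\*: (READ|WRITE) sqid:1' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt \
    | awk '{print $2}' | sort | uniq -c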
00:15:36.895 [2024-12-15 05:55:51.782996] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:36.895 [2024-12-15 05:55:51.783060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.895 [2024-12-15 05:55:51.783082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.783097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.895 [2024-12-15 05:55:51.783111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.783126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.895 [2024-12-15 05:55:51.783139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.783154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.895 [2024-12-15 05:55:51.783178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.895 [2024-12-15 05:55:51.783196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:36.895 [2024-12-15 05:55:51.783230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579d40 (9): Bad file descriptor 00:15:36.895 [2024-12-15 05:55:51.785880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:36.895 [2024-12-15 05:55:51.813075] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:36.895 00:15:36.895 Latency(us) 00:15:36.895 [2024-12-15T05:55:58.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.895 [2024-12-15T05:55:58.536Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:36.895 Verification LBA range: start 0x0 length 0x4000 00:15:36.896 NVMe0n1 : 15.01 13351.54 52.15 302.04 0.00 9356.64 480.35 15490.33 00:15:36.896 [2024-12-15T05:55:58.537Z] =================================================================================================================== 00:15:36.896 [2024-12-15T05:55:58.537Z] Total : 13351.54 52.15 302.04 0.00 9356.64 480.35 15490.33 00:15:36.896 Received shutdown signal, test time was about 15.000000 seconds 00:15:36.896 00:15:36.896 Latency(us) 00:15:36.896 [2024-12-15T05:55:58.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.896 [2024-12-15T05:55:58.537Z] =================================================================================================================== 00:15:36.896 [2024-12-15T05:55:58.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:36.896 05:55:57 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:36.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
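After the 15-second bdevperf run, the script simply counts how many times the bdev layer logged a successful controller reset and requires exactly three, one per forced path switch. A minimal sketch of that check, assuming the run output was captured in the try.txt file cat'ed just below:

  # Sketch of the pass/fail gate used by host/failover.sh: three successful
  # resets are expected from the 15 s verify run.
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || { echo "expected 3 failover resets, got $count"; exit 1; }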
00:15:36.896 05:55:57 -- host/failover.sh@65 -- # count=3 00:15:36.896 05:55:57 -- host/failover.sh@67 -- # (( count != 3 )) 00:15:36.896 05:55:57 -- host/failover.sh@73 -- # bdevperf_pid=81871 00:15:36.896 05:55:57 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:36.896 05:55:57 -- host/failover.sh@75 -- # waitforlisten 81871 /var/tmp/bdevperf.sock 00:15:36.896 05:55:57 -- common/autotest_common.sh@829 -- # '[' -z 81871 ']' 00:15:36.896 05:55:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:36.896 05:55:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.896 05:55:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:36.896 05:55:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.896 05:55:57 -- common/autotest_common.sh@10 -- # set +x 00:15:37.155 05:55:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.155 05:55:58 -- common/autotest_common.sh@862 -- # return 0 00:15:37.155 05:55:58 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:37.414 [2024-12-15 05:55:58.943292] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:37.414 05:55:58 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:37.673 [2024-12-15 05:55:59.211646] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:37.673 05:55:59 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:37.932 NVMe0n1 00:15:37.932 05:55:59 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:38.191 00:15:38.450 05:55:59 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:38.709 00:15:38.709 05:56:00 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:38.709 05:56:00 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:38.968 05:56:00 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:39.227 05:56:00 -- host/failover.sh@87 -- # sleep 3 00:15:42.543 05:56:03 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:42.543 05:56:03 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:42.543 05:56:03 -- host/failover.sh@90 -- # run_test_pid=81948 00:15:42.543 05:56:03 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:42.543 05:56:03 -- host/failover.sh@92 -- # wait 81948 00:15:43.479 0 00:15:43.479 05:56:05 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:43.479 [2024-12-15 05:55:57.748363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:43.479 [2024-12-15 05:55:57.748486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81871 ] 00:15:43.479 [2024-12-15 05:55:57.879153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.479 [2024-12-15 05:55:57.911889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.479 [2024-12-15 05:56:00.629757] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:43.479 [2024-12-15 05:56:00.629902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.479 [2024-12-15 05:56:00.629945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.479 [2024-12-15 05:56:00.629965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.479 [2024-12-15 05:56:00.629980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.479 [2024-12-15 05:56:00.629994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.479 [2024-12-15 05:56:00.630008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.479 [2024-12-15 05:56:00.630022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.479 [2024-12-15 05:56:00.630037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.479 [2024-12-15 05:56:00.630051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:43.479 [2024-12-15 05:56:00.630101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:43.479 [2024-12-15 05:56:00.630132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6ed40 (9): Bad file descriptor 00:15:43.479 [2024-12-15 05:56:00.633575] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:43.479 Running I/O for 1 seconds... 
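The second phase traced above drives failover from the host side: one bdevperf instance in wait mode (-z), two extra listeners on the target, the same NVMe0 controller name attached on all three ports, and then the active path detached so I/O has to move. A condensed sketch of that RPC sequence, using the socket and paths from the trace (error handling and the waitforlisten step omitted):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1
  # Extra target listeners so the host has alternate paths
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
  # Attach the same controller name on every path, then drop the active one
  for port in 4420 4421 4422; do
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done
  $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN

The bdevperf.py perform_tests call seen above then runs the 1-second verify workload against whichever path the controller failed over to.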
00:15:43.479 00:15:43.479 Latency(us) 00:15:43.479 [2024-12-15T05:56:05.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.479 [2024-12-15T05:56:05.120Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:43.479 Verification LBA range: start 0x0 length 0x4000 00:15:43.479 NVMe0n1 : 1.01 13510.77 52.78 0.00 0.00 9429.04 1064.96 12511.42 00:15:43.479 [2024-12-15T05:56:05.120Z] =================================================================================================================== 00:15:43.479 [2024-12-15T05:56:05.120Z] Total : 13510.77 52.78 0.00 0.00 9429.04 1064.96 12511.42 00:15:43.479 05:56:05 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:43.479 05:56:05 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:43.737 05:56:05 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:43.995 05:56:05 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:43.995 05:56:05 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:44.254 05:56:05 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.821 05:56:06 -- host/failover.sh@101 -- # sleep 3 00:15:48.107 05:56:09 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:48.107 05:56:09 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:48.107 05:56:09 -- host/failover.sh@108 -- # killprocess 81871 00:15:48.107 05:56:09 -- common/autotest_common.sh@936 -- # '[' -z 81871 ']' 00:15:48.107 05:56:09 -- common/autotest_common.sh@940 -- # kill -0 81871 00:15:48.107 05:56:09 -- common/autotest_common.sh@941 -- # uname 00:15:48.107 05:56:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.107 05:56:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81871 00:15:48.107 killing process with pid 81871 00:15:48.107 05:56:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.107 05:56:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.107 05:56:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81871' 00:15:48.107 05:56:09 -- common/autotest_common.sh@955 -- # kill 81871 00:15:48.107 05:56:09 -- common/autotest_common.sh@960 -- # wait 81871 00:15:48.107 05:56:09 -- host/failover.sh@110 -- # sync 00:15:48.107 05:56:09 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:48.367 05:56:09 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:48.367 05:56:09 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:48.367 05:56:09 -- host/failover.sh@116 -- # nvmftestfini 00:15:48.367 05:56:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:48.367 05:56:09 -- nvmf/common.sh@116 -- # sync 00:15:48.367 05:56:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:48.367 05:56:09 -- nvmf/common.sh@119 -- # set +e 00:15:48.367 05:56:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:48.367 05:56:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:48.367 rmmod nvme_tcp 
00:15:48.367 rmmod nvme_fabrics 00:15:48.367 rmmod nvme_keyring 00:15:48.367 05:56:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:48.626 05:56:10 -- nvmf/common.sh@123 -- # set -e 00:15:48.626 05:56:10 -- nvmf/common.sh@124 -- # return 0 00:15:48.626 05:56:10 -- nvmf/common.sh@477 -- # '[' -n 81606 ']' 00:15:48.626 05:56:10 -- nvmf/common.sh@478 -- # killprocess 81606 00:15:48.626 05:56:10 -- common/autotest_common.sh@936 -- # '[' -z 81606 ']' 00:15:48.626 05:56:10 -- common/autotest_common.sh@940 -- # kill -0 81606 00:15:48.626 05:56:10 -- common/autotest_common.sh@941 -- # uname 00:15:48.626 05:56:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.626 05:56:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81606 00:15:48.626 killing process with pid 81606 00:15:48.626 05:56:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:48.626 05:56:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:48.626 05:56:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81606' 00:15:48.626 05:56:10 -- common/autotest_common.sh@955 -- # kill 81606 00:15:48.626 05:56:10 -- common/autotest_common.sh@960 -- # wait 81606 00:15:48.626 05:56:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:48.626 05:56:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:48.626 05:56:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:48.626 05:56:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.626 05:56:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:48.626 05:56:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.626 05:56:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.626 05:56:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.626 05:56:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:48.626 00:15:48.626 real 0m32.855s 00:15:48.626 user 2m8.025s 00:15:48.626 sys 0m5.296s 00:15:48.626 05:56:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:48.626 05:56:10 -- common/autotest_common.sh@10 -- # set +x 00:15:48.626 ************************************ 00:15:48.626 END TEST nvmf_failover 00:15:48.626 ************************************ 00:15:48.886 05:56:10 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:48.886 05:56:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:48.886 05:56:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.886 05:56:10 -- common/autotest_common.sh@10 -- # set +x 00:15:48.886 ************************************ 00:15:48.886 START TEST nvmf_discovery 00:15:48.886 ************************************ 00:15:48.886 05:56:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:48.886 * Looking for test storage... 
00:15:48.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:48.886 05:56:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:48.886 05:56:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:48.886 05:56:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:48.886 05:56:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:48.886 05:56:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:48.886 05:56:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:48.886 05:56:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:48.886 05:56:10 -- scripts/common.sh@335 -- # IFS=.-: 00:15:48.886 05:56:10 -- scripts/common.sh@335 -- # read -ra ver1 00:15:48.886 05:56:10 -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.886 05:56:10 -- scripts/common.sh@336 -- # read -ra ver2 00:15:48.886 05:56:10 -- scripts/common.sh@337 -- # local 'op=<' 00:15:48.886 05:56:10 -- scripts/common.sh@339 -- # ver1_l=2 00:15:48.886 05:56:10 -- scripts/common.sh@340 -- # ver2_l=1 00:15:48.886 05:56:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:48.886 05:56:10 -- scripts/common.sh@343 -- # case "$op" in 00:15:48.886 05:56:10 -- scripts/common.sh@344 -- # : 1 00:15:48.886 05:56:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:48.886 05:56:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.886 05:56:10 -- scripts/common.sh@364 -- # decimal 1 00:15:48.886 05:56:10 -- scripts/common.sh@352 -- # local d=1 00:15:48.886 05:56:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.886 05:56:10 -- scripts/common.sh@354 -- # echo 1 00:15:48.886 05:56:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:48.886 05:56:10 -- scripts/common.sh@365 -- # decimal 2 00:15:48.886 05:56:10 -- scripts/common.sh@352 -- # local d=2 00:15:48.886 05:56:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.886 05:56:10 -- scripts/common.sh@354 -- # echo 2 00:15:48.886 05:56:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:48.886 05:56:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:48.886 05:56:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:48.886 05:56:10 -- scripts/common.sh@367 -- # return 0 00:15:48.886 05:56:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.886 05:56:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.886 --rc genhtml_branch_coverage=1 00:15:48.886 --rc genhtml_function_coverage=1 00:15:48.886 --rc genhtml_legend=1 00:15:48.886 --rc geninfo_all_blocks=1 00:15:48.886 --rc geninfo_unexecuted_blocks=1 00:15:48.886 00:15:48.886 ' 00:15:48.886 05:56:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.886 --rc genhtml_branch_coverage=1 00:15:48.886 --rc genhtml_function_coverage=1 00:15:48.886 --rc genhtml_legend=1 00:15:48.886 --rc geninfo_all_blocks=1 00:15:48.886 --rc geninfo_unexecuted_blocks=1 00:15:48.886 00:15:48.886 ' 00:15:48.886 05:56:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.886 --rc genhtml_branch_coverage=1 00:15:48.886 --rc genhtml_function_coverage=1 00:15:48.886 --rc genhtml_legend=1 00:15:48.886 --rc geninfo_all_blocks=1 00:15:48.886 --rc geninfo_unexecuted_blocks=1 00:15:48.886 00:15:48.886 ' 00:15:48.886 
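The lcov guard above runs the cmp_versions helper from scripts/common.sh, which splits both version strings on '.', '-' and ':' and compares them field by field. The sketch below restates the idea only; it is not the scripts/common.sh implementation, and the function name is made up for illustration:

  # Illustrative field-wise "less than" comparison: version_lt 1.15 2 succeeds,
  # mirroring the 'lt 1.15 2' call traced above (numeric fields only).
  version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"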
05:56:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.886 --rc genhtml_branch_coverage=1 00:15:48.886 --rc genhtml_function_coverage=1 00:15:48.886 --rc genhtml_legend=1 00:15:48.886 --rc geninfo_all_blocks=1 00:15:48.886 --rc geninfo_unexecuted_blocks=1 00:15:48.886 00:15:48.886 ' 00:15:48.886 05:56:10 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.886 05:56:10 -- nvmf/common.sh@7 -- # uname -s 00:15:48.886 05:56:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.886 05:56:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.886 05:56:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.886 05:56:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.886 05:56:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.886 05:56:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.886 05:56:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.886 05:56:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.886 05:56:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.886 05:56:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.886 05:56:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:15:48.886 05:56:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:15:48.886 05:56:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.886 05:56:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.886 05:56:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.886 05:56:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.886 05:56:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.886 05:56:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.886 05:56:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.886 05:56:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.886 05:56:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.886 05:56:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.887 05:56:10 -- paths/export.sh@5 -- # export PATH 00:15:48.887 05:56:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.887 05:56:10 -- nvmf/common.sh@46 -- # : 0 00:15:48.887 05:56:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:48.887 05:56:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:48.887 05:56:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:48.887 05:56:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.887 05:56:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.887 05:56:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:48.887 05:56:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:48.887 05:56:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:48.887 05:56:10 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:48.887 05:56:10 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:48.887 05:56:10 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:48.887 05:56:10 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:48.887 05:56:10 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:48.887 05:56:10 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:48.887 05:56:10 -- host/discovery.sh@25 -- # nvmftestinit 00:15:48.887 05:56:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:48.887 05:56:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.887 05:56:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:48.887 05:56:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:48.887 05:56:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:48.887 05:56:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.887 05:56:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.887 05:56:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.887 05:56:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:48.887 05:56:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:48.887 05:56:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:48.887 05:56:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:48.887 05:56:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:48.887 05:56:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:48.887 05:56:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.887 05:56:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.887 05:56:10 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.887 05:56:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:48.887 05:56:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.887 05:56:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.887 05:56:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.887 05:56:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.887 05:56:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.887 05:56:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.887 05:56:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.887 05:56:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.887 05:56:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:48.887 05:56:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:48.887 Cannot find device "nvmf_tgt_br" 00:15:48.887 05:56:10 -- nvmf/common.sh@154 -- # true 00:15:48.887 05:56:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.887 Cannot find device "nvmf_tgt_br2" 00:15:48.887 05:56:10 -- nvmf/common.sh@155 -- # true 00:15:48.887 05:56:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:49.150 05:56:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:49.150 Cannot find device "nvmf_tgt_br" 00:15:49.150 05:56:10 -- nvmf/common.sh@157 -- # true 00:15:49.150 05:56:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:49.150 Cannot find device "nvmf_tgt_br2" 00:15:49.150 05:56:10 -- nvmf/common.sh@158 -- # true 00:15:49.150 05:56:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:49.150 05:56:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:49.150 05:56:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.150 05:56:10 -- nvmf/common.sh@161 -- # true 00:15:49.150 05:56:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.150 05:56:10 -- nvmf/common.sh@162 -- # true 00:15:49.150 05:56:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.150 05:56:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.150 05:56:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.150 05:56:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.150 05:56:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.150 05:56:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.150 05:56:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.150 05:56:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:49.150 05:56:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:49.150 05:56:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:49.150 05:56:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:49.150 05:56:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:49.150 05:56:10 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:49.150 05:56:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.150 05:56:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.150 05:56:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.150 05:56:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:49.150 05:56:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:49.150 05:56:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.150 05:56:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.150 05:56:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.150 05:56:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.150 05:56:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.150 05:56:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:49.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:49.150 00:15:49.150 --- 10.0.0.2 ping statistics --- 00:15:49.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.150 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:49.150 05:56:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:49.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:49.150 00:15:49.150 --- 10.0.0.3 ping statistics --- 00:15:49.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.150 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:49.150 05:56:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:49.150 00:15:49.150 --- 10.0.0.1 ping statistics --- 00:15:49.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.150 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:49.408 05:56:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.408 05:56:10 -- nvmf/common.sh@421 -- # return 0 00:15:49.408 05:56:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:49.408 05:56:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.408 05:56:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:49.408 05:56:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:49.408 05:56:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.408 05:56:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:49.408 05:56:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:49.408 05:56:10 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:49.408 05:56:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:49.408 05:56:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.408 05:56:10 -- common/autotest_common.sh@10 -- # set +x 00:15:49.408 05:56:10 -- nvmf/common.sh@469 -- # nvmfpid=82219 00:15:49.408 05:56:10 -- nvmf/common.sh@470 -- # waitforlisten 82219 00:15:49.408 05:56:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.408 05:56:10 -- common/autotest_common.sh@829 -- # '[' -z 82219 ']' 00:15:49.408 05:56:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.408 05:56:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.408 05:56:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.408 05:56:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.408 05:56:10 -- common/autotest_common.sh@10 -- # set +x 00:15:49.408 [2024-12-15 05:56:10.862336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:49.408 [2024-12-15 05:56:10.862452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.408 [2024-12-15 05:56:11.001233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.408 [2024-12-15 05:56:11.038537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:49.408 [2024-12-15 05:56:11.038695] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.408 [2024-12-15 05:56:11.038708] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.408 [2024-12-15 05:56:11.038716] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
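The block above is nvmf_veth_init building the test topology: a network namespace for the target, veth pairs for the initiator and target sides, a bridge tying them together, 10.0.0.1/2/3 addressing, an iptables accept rule for the NVMe/TCP port, and ping checks in both directions. A condensed recap using the same names and addresses as the trace (the second target interface for 10.0.0.3 and the cleanup path are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator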
00:15:49.408 [2024-12-15 05:56:11.038738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.345 05:56:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.345 05:56:11 -- common/autotest_common.sh@862 -- # return 0 00:15:50.345 05:56:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:50.345 05:56:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.345 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.345 05:56:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.345 05:56:11 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:50.345 05:56:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.345 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.345 [2024-12-15 05:56:11.873000] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.345 05:56:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.345 05:56:11 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:50.345 05:56:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.345 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.345 [2024-12-15 05:56:11.881133] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:50.345 05:56:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.345 05:56:11 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:50.345 05:56:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.345 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.345 null0 00:15:50.345 05:56:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.345 05:56:11 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:50.345 05:56:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.345 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.345 null1 00:15:50.345 05:56:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.345 05:56:11 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:50.345 05:56:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.345 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.345 05:56:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.345 05:56:11 -- host/discovery.sh@45 -- # hostpid=82257 00:15:50.345 05:56:11 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:50.345 05:56:11 -- host/discovery.sh@46 -- # waitforlisten 82257 /tmp/host.sock 00:15:50.345 05:56:11 -- common/autotest_common.sh@829 -- # '[' -z 82257 ']' 00:15:50.345 05:56:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:50.345 05:56:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.345 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:50.345 05:56:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:50.345 05:56:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.345 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:15:50.345 [2024-12-15 05:56:11.957678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:50.345 [2024-12-15 05:56:11.957782] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82257 ] 00:15:50.605 [2024-12-15 05:56:12.096890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.605 [2024-12-15 05:56:12.135486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:50.605 [2024-12-15 05:56:12.135732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.542 05:56:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.542 05:56:12 -- common/autotest_common.sh@862 -- # return 0 00:15:51.542 05:56:12 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:51.542 05:56:12 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:51.542 05:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.542 05:56:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.542 05:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.542 05:56:12 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:51.542 05:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.542 05:56:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.542 05:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.542 05:56:12 -- host/discovery.sh@72 -- # notify_id=0 00:15:51.542 05:56:12 -- host/discovery.sh@78 -- # get_subsystem_names 00:15:51.542 05:56:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:51.542 05:56:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:51.542 05:56:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.542 05:56:12 -- common/autotest_common.sh@10 -- # set +x 00:15:51.542 05:56:12 -- host/discovery.sh@59 -- # sort 00:15:51.542 05:56:12 -- host/discovery.sh@59 -- # xargs 00:15:51.542 05:56:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.542 05:56:13 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:15:51.542 05:56:13 -- host/discovery.sh@79 -- # get_bdev_list 00:15:51.542 05:56:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.542 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.542 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.542 05:56:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:51.542 05:56:13 -- host/discovery.sh@55 -- # sort 00:15:51.542 05:56:13 -- host/discovery.sh@55 -- # xargs 00:15:51.542 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.542 05:56:13 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:15:51.542 05:56:13 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:51.542 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.542 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.542 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.542 05:56:13 -- host/discovery.sh@82 -- # get_subsystem_names 00:15:51.542 05:56:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:51.542 05:56:13 -- host/discovery.sh@59 -- # sort 00:15:51.542 05:56:13 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:15:51.542 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.542 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.542 05:56:13 -- host/discovery.sh@59 -- # xargs 00:15:51.542 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.542 05:56:13 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:15:51.542 05:56:13 -- host/discovery.sh@83 -- # get_bdev_list 00:15:51.542 05:56:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.542 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.542 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.542 05:56:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:51.542 05:56:13 -- host/discovery.sh@55 -- # sort 00:15:51.542 05:56:13 -- host/discovery.sh@55 -- # xargs 00:15:51.542 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.801 05:56:13 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:51.801 05:56:13 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:51.801 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.801 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.801 05:56:13 -- host/discovery.sh@86 -- # get_subsystem_names 00:15:51.801 05:56:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:51.801 05:56:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:51.801 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.801 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 05:56:13 -- host/discovery.sh@59 -- # sort 00:15:51.801 05:56:13 -- host/discovery.sh@59 -- # xargs 00:15:51.801 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.801 05:56:13 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:15:51.801 05:56:13 -- host/discovery.sh@87 -- # get_bdev_list 00:15:51.801 05:56:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:51.801 05:56:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.801 05:56:13 -- host/discovery.sh@55 -- # sort 00:15:51.801 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.801 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 05:56:13 -- host/discovery.sh@55 -- # xargs 00:15:51.801 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.801 05:56:13 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:51.801 05:56:13 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:51.801 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.801 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 [2024-12-15 05:56:13.341595] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.801 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.801 05:56:13 -- host/discovery.sh@92 -- # get_subsystem_names 00:15:51.801 05:56:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:51.801 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.801 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 05:56:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:51.801 05:56:13 -- host/discovery.sh@59 -- # sort 00:15:51.801 05:56:13 -- host/discovery.sh@59 -- # xargs 00:15:51.801 
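From here the discovery test drives two SPDK instances: the nvmf target on the default RPC socket exposes the discovery subsystem on 10.0.0.2:8009, while the host app started with -r /tmp/host.sock runs bdev_nvme_start_discovery against it and is then polled for the controllers and bdevs it picks up. A condensed sketch of that sequence; rpc_cmd in the trace is assumed to wrap scripts/rpc.py with the matching -s socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (default RPC socket)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512
  # Host side: follow the discovery log page and auto-attach new subsystems
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # Poll what the host sees; both lists stay empty until cnode0 gets a listener
  $RPC -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  $RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'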
05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.801 05:56:13 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:51.801 05:56:13 -- host/discovery.sh@93 -- # get_bdev_list 00:15:51.801 05:56:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.801 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.801 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 05:56:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:51.801 05:56:13 -- host/discovery.sh@55 -- # sort 00:15:51.801 05:56:13 -- host/discovery.sh@55 -- # xargs 00:15:51.801 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.061 05:56:13 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:15:52.061 05:56:13 -- host/discovery.sh@94 -- # get_notification_count 00:15:52.061 05:56:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:52.061 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.061 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:52.061 05:56:13 -- host/discovery.sh@74 -- # jq '. | length' 00:15:52.061 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.061 05:56:13 -- host/discovery.sh@74 -- # notification_count=0 00:15:52.061 05:56:13 -- host/discovery.sh@75 -- # notify_id=0 00:15:52.061 05:56:13 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:15:52.061 05:56:13 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:52.061 05:56:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.061 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:15:52.061 05:56:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.061 05:56:13 -- host/discovery.sh@100 -- # sleep 1 00:15:52.628 [2024-12-15 05:56:13.985880] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:52.628 [2024-12-15 05:56:13.985935] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:52.628 [2024-12-15 05:56:13.985953] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:52.628 [2024-12-15 05:56:13.991936] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:52.628 [2024-12-15 05:56:14.047674] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:52.628 [2024-12-15 05:56:14.047702] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:52.888 05:56:14 -- host/discovery.sh@101 -- # get_subsystem_names 00:15:52.888 05:56:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:52.888 05:56:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:52.888 05:56:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.888 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 05:56:14 -- host/discovery.sh@59 -- # sort 00:15:52.888 05:56:14 -- host/discovery.sh@59 -- # xargs 00:15:53.151 05:56:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@102 -- # get_bdev_list 00:15:53.151 05:56:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:15:53.151 05:56:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:53.151 05:56:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.151 05:56:14 -- host/discovery.sh@55 -- # sort 00:15:53.151 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:15:53.151 05:56:14 -- host/discovery.sh@55 -- # xargs 00:15:53.151 05:56:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:15:53.151 05:56:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:53.151 05:56:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.151 05:56:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:53.151 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:15:53.151 05:56:14 -- host/discovery.sh@63 -- # sort -n 00:15:53.151 05:56:14 -- host/discovery.sh@63 -- # xargs 00:15:53.151 05:56:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@104 -- # get_notification_count 00:15:53.151 05:56:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:53.151 05:56:14 -- host/discovery.sh@74 -- # jq '. | length' 00:15:53.151 05:56:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.151 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:15:53.151 05:56:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@74 -- # notification_count=1 00:15:53.151 05:56:14 -- host/discovery.sh@75 -- # notify_id=1 00:15:53.151 05:56:14 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:53.151 05:56:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.151 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:15:53.151 05:56:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.151 05:56:14 -- host/discovery.sh@109 -- # sleep 1 00:15:54.526 05:56:15 -- host/discovery.sh@110 -- # get_bdev_list 00:15:54.526 05:56:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.526 05:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.526 05:56:15 -- common/autotest_common.sh@10 -- # set +x 00:15:54.526 05:56:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.526 05:56:15 -- host/discovery.sh@55 -- # sort 00:15:54.526 05:56:15 -- host/discovery.sh@55 -- # xargs 00:15:54.526 05:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.526 05:56:15 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:54.526 05:56:15 -- host/discovery.sh@111 -- # get_notification_count 00:15:54.526 05:56:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:54.526 05:56:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:54.526 05:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.526 05:56:15 -- common/autotest_common.sh@10 -- # set +x 00:15:54.526 05:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.526 05:56:15 -- host/discovery.sh@74 -- # notification_count=1 00:15:54.526 05:56:15 -- host/discovery.sh@75 -- # notify_id=2 00:15:54.526 05:56:15 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:15:54.526 05:56:15 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:54.526 05:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.526 05:56:15 -- common/autotest_common.sh@10 -- # set +x 00:15:54.526 [2024-12-15 05:56:15.877492] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:54.526 [2024-12-15 05:56:15.878328] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:54.526 [2024-12-15 05:56:15.878384] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:54.526 05:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.526 05:56:15 -- host/discovery.sh@117 -- # sleep 1 00:15:54.526 [2024-12-15 05:56:15.884298] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:54.526 [2024-12-15 05:56:15.949625] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:54.526 [2024-12-15 05:56:15.949653] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:54.526 [2024-12-15 05:56:15.949675] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:55.462 05:56:16 -- host/discovery.sh@118 -- # get_subsystem_names 00:15:55.462 05:56:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.462 05:56:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.462 05:56:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.462 05:56:16 -- common/autotest_common.sh@10 -- # set +x 00:15:55.462 05:56:16 -- host/discovery.sh@59 -- # sort 00:15:55.462 05:56:16 -- host/discovery.sh@59 -- # xargs 00:15:55.462 05:56:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.462 05:56:16 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.462 05:56:16 -- host/discovery.sh@119 -- # get_bdev_list 00:15:55.462 05:56:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.462 05:56:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.462 05:56:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.462 05:56:16 -- host/discovery.sh@55 -- # sort 00:15:55.462 05:56:16 -- common/autotest_common.sh@10 -- # set +x 00:15:55.462 05:56:16 -- host/discovery.sh@55 -- # xargs 00:15:55.463 05:56:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.463 05:56:17 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:55.463 05:56:17 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:15:55.463 05:56:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:55.463 05:56:17 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:55.463 05:56:17 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.463 05:56:17 -- host/discovery.sh@63 -- # sort -n 00:15:55.463 05:56:17 -- common/autotest_common.sh@10 -- # set +x 00:15:55.463 05:56:17 -- host/discovery.sh@63 -- # xargs 00:15:55.463 05:56:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.463 05:56:17 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:55.463 05:56:17 -- host/discovery.sh@121 -- # get_notification_count 00:15:55.463 05:56:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:55.463 05:56:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.463 05:56:17 -- common/autotest_common.sh@10 -- # set +x 00:15:55.463 05:56:17 -- host/discovery.sh@74 -- # jq '. | length' 00:15:55.463 05:56:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.721 05:56:17 -- host/discovery.sh@74 -- # notification_count=0 00:15:55.721 05:56:17 -- host/discovery.sh@75 -- # notify_id=2 00:15:55.721 05:56:17 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:15:55.721 05:56:17 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:55.721 05:56:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.721 05:56:17 -- common/autotest_common.sh@10 -- # set +x 00:15:55.721 [2024-12-15 05:56:17.115733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.721 [2024-12-15 05:56:17.115778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.721 [2024-12-15 05:56:17.115794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.721 [2024-12-15 05:56:17.115804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.721 [2024-12-15 05:56:17.115814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.721 [2024-12-15 05:56:17.115824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.721 [2024-12-15 05:56:17.115834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.721 [2024-12-15 05:56:17.115843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.721 [2024-12-15 05:56:17.115853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2437150 is same with the state(5) to be set 00:15:55.721 [2024-12-15 05:56:17.115943] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:55.721 [2024-12-15 05:56:17.115965] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:55.721 05:56:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.721 05:56:17 -- host/discovery.sh@127 -- # sleep 1 00:15:55.721 [2024-12-15 05:56:17.121951] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:55.721 [2024-12-15 05:56:17.121995] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:55.721 [2024-12-15 05:56:17.122054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2437150 (9): Bad file descriptor 00:15:56.656 05:56:18 -- host/discovery.sh@128 -- # get_subsystem_names 00:15:56.656 05:56:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:56.656 05:56:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:56.656 05:56:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.656 05:56:18 -- host/discovery.sh@59 -- # sort 00:15:56.656 05:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:56.656 05:56:18 -- host/discovery.sh@59 -- # xargs 00:15:56.656 05:56:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.656 05:56:18 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.656 05:56:18 -- host/discovery.sh@129 -- # get_bdev_list 00:15:56.656 05:56:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.656 05:56:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.656 05:56:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.656 05:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:56.656 05:56:18 -- host/discovery.sh@55 -- # sort 00:15:56.656 05:56:18 -- host/discovery.sh@55 -- # xargs 00:15:56.656 05:56:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.656 05:56:18 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:56.656 05:56:18 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:15:56.656 05:56:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:56.656 05:56:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.656 05:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:56.656 05:56:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:56.656 05:56:18 -- host/discovery.sh@63 -- # sort -n 00:15:56.656 05:56:18 -- host/discovery.sh@63 -- # xargs 00:15:56.656 05:56:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.656 05:56:18 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:15:56.656 05:56:18 -- host/discovery.sh@131 -- # get_notification_count 00:15:56.656 05:56:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:56.656 05:56:18 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:56.656 05:56:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.656 05:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:56.915 05:56:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.915 05:56:18 -- host/discovery.sh@74 -- # notification_count=0 00:15:56.915 05:56:18 -- host/discovery.sh@75 -- # notify_id=2 00:15:56.915 05:56:18 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:15:56.915 05:56:18 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:56.915 05:56:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.915 05:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:56.915 05:56:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.915 05:56:18 -- host/discovery.sh@135 -- # sleep 1 00:15:57.850 05:56:19 -- host/discovery.sh@136 -- # get_subsystem_names 00:15:57.850 05:56:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:57.850 05:56:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.850 05:56:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:57.850 05:56:19 -- common/autotest_common.sh@10 -- # set +x 00:15:57.850 05:56:19 -- host/discovery.sh@59 -- # sort 00:15:57.850 05:56:19 -- host/discovery.sh@59 -- # xargs 00:15:57.850 05:56:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.850 05:56:19 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:15:57.850 05:56:19 -- host/discovery.sh@137 -- # get_bdev_list 00:15:57.850 05:56:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.850 05:56:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.850 05:56:19 -- common/autotest_common.sh@10 -- # set +x 00:15:57.850 05:56:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:57.850 05:56:19 -- host/discovery.sh@55 -- # sort 00:15:57.850 05:56:19 -- host/discovery.sh@55 -- # xargs 00:15:57.850 05:56:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.850 05:56:19 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:15:57.850 05:56:19 -- host/discovery.sh@138 -- # get_notification_count 00:15:57.850 05:56:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:57.850 05:56:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:57.850 05:56:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.850 05:56:19 -- common/autotest_common.sh@10 -- # set +x 00:15:57.850 05:56:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.108 05:56:19 -- host/discovery.sh@74 -- # notification_count=2 00:15:58.108 05:56:19 -- host/discovery.sh@75 -- # notify_id=4 00:15:58.108 05:56:19 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:15:58.108 05:56:19 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:58.108 05:56:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.108 05:56:19 -- common/autotest_common.sh@10 -- # set +x 00:15:59.044 [2024-12-15 05:56:20.538623] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:59.044 [2024-12-15 05:56:20.538653] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:59.044 [2024-12-15 05:56:20.538685] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:59.044 [2024-12-15 05:56:20.544662] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:59.044 [2024-12-15 05:56:20.603737] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:59.044 [2024-12-15 05:56:20.603793] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:59.044 05:56:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.044 05:56:20 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:59.044 05:56:20 -- common/autotest_common.sh@650 -- # local es=0 00:15:59.044 05:56:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:59.044 05:56:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:59.044 05:56:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.044 05:56:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:59.044 05:56:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.044 05:56:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:59.044 05:56:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.044 05:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:59.044 request: 00:15:59.044 { 00:15:59.044 "name": "nvme", 00:15:59.044 "trtype": "tcp", 00:15:59.044 "traddr": "10.0.0.2", 00:15:59.044 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:59.044 "adrfam": "ipv4", 00:15:59.044 "trsvcid": "8009", 00:15:59.044 "wait_for_attach": true, 00:15:59.044 "method": "bdev_nvme_start_discovery", 00:15:59.044 "req_id": 1 00:15:59.044 } 00:15:59.044 Got JSON-RPC error response 00:15:59.044 response: 00:15:59.044 { 00:15:59.044 "code": -17, 00:15:59.044 "message": "File exists" 00:15:59.044 } 00:15:59.044 05:56:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:59.044 05:56:20 -- common/autotest_common.sh@653 -- # es=1 00:15:59.044 05:56:20 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:59.044 05:56:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:59.044 05:56:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:59.044 05:56:20 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:15:59.044 05:56:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:59.044 05:56:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:59.044 05:56:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.044 05:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:59.044 05:56:20 -- host/discovery.sh@67 -- # sort 00:15:59.044 05:56:20 -- host/discovery.sh@67 -- # xargs 00:15:59.044 05:56:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.044 05:56:20 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:15:59.044 05:56:20 -- host/discovery.sh@147 -- # get_bdev_list 00:15:59.303 05:56:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.304 05:56:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.304 05:56:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:59.304 05:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:59.304 05:56:20 -- host/discovery.sh@55 -- # sort 00:15:59.304 05:56:20 -- host/discovery.sh@55 -- # xargs 00:15:59.304 05:56:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.304 05:56:20 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:59.304 05:56:20 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:59.304 05:56:20 -- common/autotest_common.sh@650 -- # local es=0 00:15:59.304 05:56:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:59.304 05:56:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:59.304 05:56:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.304 05:56:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:59.304 05:56:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.304 05:56:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:59.304 05:56:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.304 05:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:59.304 request: 00:15:59.304 { 00:15:59.304 "name": "nvme_second", 00:15:59.304 "trtype": "tcp", 00:15:59.304 "traddr": "10.0.0.2", 00:15:59.304 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:59.304 "adrfam": "ipv4", 00:15:59.304 "trsvcid": "8009", 00:15:59.304 "wait_for_attach": true, 00:15:59.304 "method": "bdev_nvme_start_discovery", 00:15:59.304 "req_id": 1 00:15:59.304 } 00:15:59.304 Got JSON-RPC error response 00:15:59.304 response: 00:15:59.304 { 00:15:59.304 "code": -17, 00:15:59.304 "message": "File exists" 00:15:59.304 } 00:15:59.304 05:56:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:59.304 05:56:20 -- common/autotest_common.sh@653 -- # es=1 00:15:59.304 05:56:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:59.304 05:56:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:59.304 05:56:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:59.304 
05:56:20 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:15:59.304 05:56:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:59.304 05:56:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.304 05:56:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:59.304 05:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:59.304 05:56:20 -- host/discovery.sh@67 -- # sort 00:15:59.304 05:56:20 -- host/discovery.sh@67 -- # xargs 00:15:59.304 05:56:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.304 05:56:20 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:15:59.304 05:56:20 -- host/discovery.sh@153 -- # get_bdev_list 00:15:59.304 05:56:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.304 05:56:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.304 05:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:59.304 05:56:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:59.304 05:56:20 -- host/discovery.sh@55 -- # sort 00:15:59.304 05:56:20 -- host/discovery.sh@55 -- # xargs 00:15:59.304 05:56:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.304 05:56:20 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:59.304 05:56:20 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:59.304 05:56:20 -- common/autotest_common.sh@650 -- # local es=0 00:15:59.304 05:56:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:59.304 05:56:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:59.304 05:56:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.304 05:56:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:59.304 05:56:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.304 05:56:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:59.304 05:56:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.304 05:56:20 -- common/autotest_common.sh@10 -- # set +x 00:16:00.240 [2024-12-15 05:56:21.865910] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:00.240 [2024-12-15 05:56:21.866054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:00.240 [2024-12-15 05:56:21.866096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:00.240 [2024-12-15 05:56:21.866112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2476300 with addr=10.0.0.2, port=8010 00:16:00.240 [2024-12-15 05:56:21.866129] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:00.240 [2024-12-15 05:56:21.866138] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:00.240 [2024-12-15 05:56:21.866147] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:01.615 [2024-12-15 05:56:22.865863] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:01.615 [2024-12-15 05:56:22.865984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:16:01.615 [2024-12-15 05:56:22.866023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:01.615 [2024-12-15 05:56:22.866038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2476300 with addr=10.0.0.2, port=8010 00:16:01.615 [2024-12-15 05:56:22.866053] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:01.615 [2024-12-15 05:56:22.866062] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:01.615 [2024-12-15 05:56:22.866072] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:02.552 [2024-12-15 05:56:23.865750] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:02.552 request: 00:16:02.552 { 00:16:02.552 "name": "nvme_second", 00:16:02.552 "trtype": "tcp", 00:16:02.552 "traddr": "10.0.0.2", 00:16:02.552 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:02.552 "adrfam": "ipv4", 00:16:02.552 "trsvcid": "8010", 00:16:02.552 "attach_timeout_ms": 3000, 00:16:02.552 "method": "bdev_nvme_start_discovery", 00:16:02.552 "req_id": 1 00:16:02.552 } 00:16:02.552 Got JSON-RPC error response 00:16:02.552 response: 00:16:02.552 { 00:16:02.552 "code": -110, 00:16:02.552 "message": "Connection timed out" 00:16:02.552 } 00:16:02.552 05:56:23 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:02.552 05:56:23 -- common/autotest_common.sh@653 -- # es=1 00:16:02.552 05:56:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.552 05:56:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.552 05:56:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.552 05:56:23 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:02.552 05:56:23 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:02.552 05:56:23 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:02.552 05:56:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.552 05:56:23 -- host/discovery.sh@67 -- # xargs 00:16:02.552 05:56:23 -- common/autotest_common.sh@10 -- # set +x 00:16:02.552 05:56:23 -- host/discovery.sh@67 -- # sort 00:16:02.552 05:56:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.552 05:56:23 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:02.552 05:56:23 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:02.552 05:56:23 -- host/discovery.sh@162 -- # kill 82257 00:16:02.552 05:56:23 -- host/discovery.sh@163 -- # nvmftestfini 00:16:02.552 05:56:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:02.552 05:56:23 -- nvmf/common.sh@116 -- # sync 00:16:02.552 05:56:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:02.552 05:56:23 -- nvmf/common.sh@119 -- # set +e 00:16:02.552 05:56:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:02.552 05:56:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:02.552 rmmod nvme_tcp 00:16:02.552 rmmod nvme_fabrics 00:16:02.552 rmmod nvme_keyring 00:16:02.552 05:56:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:02.552 05:56:24 -- nvmf/common.sh@123 -- # set -e 00:16:02.552 05:56:24 -- nvmf/common.sh@124 -- # return 0 00:16:02.552 05:56:24 -- nvmf/common.sh@477 -- # '[' -n 82219 ']' 00:16:02.552 05:56:24 -- nvmf/common.sh@478 -- # killprocess 82219 00:16:02.552 05:56:24 -- common/autotest_common.sh@936 -- # '[' -z 82219 ']' 00:16:02.552 05:56:24 -- common/autotest_common.sh@940 -- # kill -0 82219 00:16:02.552 05:56:24 -- 
common/autotest_common.sh@941 -- # uname 00:16:02.552 05:56:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:02.552 05:56:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82219 00:16:02.552 05:56:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:02.552 05:56:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:02.552 killing process with pid 82219 00:16:02.552 05:56:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82219' 00:16:02.552 05:56:24 -- common/autotest_common.sh@955 -- # kill 82219 00:16:02.552 05:56:24 -- common/autotest_common.sh@960 -- # wait 82219 00:16:02.811 05:56:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:02.811 05:56:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:02.811 05:56:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:02.811 05:56:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.811 05:56:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:02.811 05:56:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.811 05:56:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.811 05:56:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.811 05:56:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:02.811 00:16:02.811 real 0m13.961s 00:16:02.811 user 0m26.930s 00:16:02.811 sys 0m2.113s 00:16:02.811 05:56:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:02.811 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 ************************************ 00:16:02.811 END TEST nvmf_discovery 00:16:02.811 ************************************ 00:16:02.811 05:56:24 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:02.811 05:56:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:02.811 05:56:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:02.811 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 ************************************ 00:16:02.811 START TEST nvmf_discovery_remove_ifc 00:16:02.811 ************************************ 00:16:02.811 05:56:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:02.811 * Looking for test storage... 
00:16:02.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:02.811 05:56:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:02.811 05:56:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:02.811 05:56:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:02.811 05:56:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:02.811 05:56:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:02.811 05:56:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:02.811 05:56:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:02.811 05:56:24 -- scripts/common.sh@335 -- # IFS=.-: 00:16:02.811 05:56:24 -- scripts/common.sh@335 -- # read -ra ver1 00:16:02.811 05:56:24 -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.811 05:56:24 -- scripts/common.sh@336 -- # read -ra ver2 00:16:02.811 05:56:24 -- scripts/common.sh@337 -- # local 'op=<' 00:16:02.811 05:56:24 -- scripts/common.sh@339 -- # ver1_l=2 00:16:02.811 05:56:24 -- scripts/common.sh@340 -- # ver2_l=1 00:16:02.811 05:56:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:02.811 05:56:24 -- scripts/common.sh@343 -- # case "$op" in 00:16:02.811 05:56:24 -- scripts/common.sh@344 -- # : 1 00:16:02.811 05:56:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:02.811 05:56:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:03.070 05:56:24 -- scripts/common.sh@364 -- # decimal 1 00:16:03.070 05:56:24 -- scripts/common.sh@352 -- # local d=1 00:16:03.070 05:56:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.070 05:56:24 -- scripts/common.sh@354 -- # echo 1 00:16:03.070 05:56:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:03.070 05:56:24 -- scripts/common.sh@365 -- # decimal 2 00:16:03.070 05:56:24 -- scripts/common.sh@352 -- # local d=2 00:16:03.070 05:56:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.070 05:56:24 -- scripts/common.sh@354 -- # echo 2 00:16:03.070 05:56:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:03.070 05:56:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:03.070 05:56:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:03.070 05:56:24 -- scripts/common.sh@367 -- # return 0 00:16:03.070 05:56:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.070 05:56:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.070 --rc genhtml_branch_coverage=1 00:16:03.070 --rc genhtml_function_coverage=1 00:16:03.070 --rc genhtml_legend=1 00:16:03.070 --rc geninfo_all_blocks=1 00:16:03.070 --rc geninfo_unexecuted_blocks=1 00:16:03.070 00:16:03.070 ' 00:16:03.070 05:56:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.070 --rc genhtml_branch_coverage=1 00:16:03.070 --rc genhtml_function_coverage=1 00:16:03.070 --rc genhtml_legend=1 00:16:03.070 --rc geninfo_all_blocks=1 00:16:03.070 --rc geninfo_unexecuted_blocks=1 00:16:03.070 00:16:03.070 ' 00:16:03.070 05:56:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.070 --rc genhtml_branch_coverage=1 00:16:03.070 --rc genhtml_function_coverage=1 00:16:03.070 --rc genhtml_legend=1 00:16:03.070 --rc geninfo_all_blocks=1 00:16:03.070 --rc geninfo_unexecuted_blocks=1 00:16:03.070 00:16:03.070 ' 00:16:03.070 
05:56:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.070 --rc genhtml_branch_coverage=1 00:16:03.070 --rc genhtml_function_coverage=1 00:16:03.070 --rc genhtml_legend=1 00:16:03.070 --rc geninfo_all_blocks=1 00:16:03.070 --rc geninfo_unexecuted_blocks=1 00:16:03.070 00:16:03.070 ' 00:16:03.070 05:56:24 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:03.070 05:56:24 -- nvmf/common.sh@7 -- # uname -s 00:16:03.070 05:56:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.070 05:56:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.070 05:56:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.070 05:56:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.070 05:56:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.070 05:56:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.070 05:56:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.070 05:56:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.070 05:56:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.070 05:56:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.070 05:56:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:16:03.070 05:56:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:16:03.070 05:56:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.070 05:56:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.070 05:56:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:03.070 05:56:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:03.070 05:56:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.070 05:56:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.070 05:56:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.070 05:56:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.070 05:56:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.070 05:56:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.070 05:56:24 -- paths/export.sh@5 -- # export PATH 00:16:03.070 05:56:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.070 05:56:24 -- nvmf/common.sh@46 -- # : 0 00:16:03.070 05:56:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:03.070 05:56:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:03.070 05:56:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:03.070 05:56:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.070 05:56:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.070 05:56:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:03.070 05:56:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:03.070 05:56:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:03.070 05:56:24 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:03.070 05:56:24 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:03.070 05:56:24 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:03.070 05:56:24 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:03.070 05:56:24 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:03.070 05:56:24 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:03.070 05:56:24 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:03.070 05:56:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:03.070 05:56:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.070 05:56:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:03.070 05:56:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:03.070 05:56:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:03.070 05:56:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.070 05:56:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.070 05:56:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.070 05:56:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:03.070 05:56:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:03.070 05:56:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:03.070 05:56:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:03.070 05:56:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:03.070 05:56:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:03.070 05:56:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.070 05:56:24 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.070 05:56:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:03.070 05:56:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:03.070 05:56:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:03.070 05:56:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:03.070 05:56:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:03.070 05:56:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.070 05:56:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:03.070 05:56:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:03.070 05:56:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:03.070 05:56:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:03.070 05:56:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:03.070 05:56:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:03.070 Cannot find device "nvmf_tgt_br" 00:16:03.070 05:56:24 -- nvmf/common.sh@154 -- # true 00:16:03.070 05:56:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.070 Cannot find device "nvmf_tgt_br2" 00:16:03.070 05:56:24 -- nvmf/common.sh@155 -- # true 00:16:03.070 05:56:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:03.070 05:56:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:03.070 Cannot find device "nvmf_tgt_br" 00:16:03.070 05:56:24 -- nvmf/common.sh@157 -- # true 00:16:03.070 05:56:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:03.070 Cannot find device "nvmf_tgt_br2" 00:16:03.070 05:56:24 -- nvmf/common.sh@158 -- # true 00:16:03.070 05:56:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:03.070 05:56:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:03.070 05:56:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.070 05:56:24 -- nvmf/common.sh@161 -- # true 00:16:03.070 05:56:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.070 05:56:24 -- nvmf/common.sh@162 -- # true 00:16:03.070 05:56:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.070 05:56:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.070 05:56:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.070 05:56:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.070 05:56:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.070 05:56:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.070 05:56:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.070 05:56:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:03.070 05:56:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:03.070 05:56:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:03.070 05:56:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:03.070 05:56:24 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:03.070 05:56:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:03.329 05:56:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.329 05:56:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:03.329 05:56:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.329 05:56:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:03.329 05:56:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:03.329 05:56:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.329 05:56:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:03.329 05:56:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.329 05:56:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.329 05:56:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.329 05:56:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:03.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:03.329 00:16:03.329 --- 10.0.0.2 ping statistics --- 00:16:03.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.329 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:03.329 05:56:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:03.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:03.329 00:16:03.329 --- 10.0.0.3 ping statistics --- 00:16:03.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.329 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:03.329 05:56:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:03.329 00:16:03.329 --- 10.0.0.1 ping statistics --- 00:16:03.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.329 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:03.329 05:56:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.329 05:56:24 -- nvmf/common.sh@421 -- # return 0 00:16:03.329 05:56:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:03.329 05:56:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.329 05:56:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:03.329 05:56:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:03.329 05:56:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.329 05:56:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:03.329 05:56:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:03.329 05:56:24 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:03.329 05:56:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:03.329 05:56:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:03.329 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:03.329 05:56:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:03.329 05:56:24 -- nvmf/common.sh@469 -- # nvmfpid=82753 00:16:03.329 05:56:24 -- nvmf/common.sh@470 -- # waitforlisten 82753 00:16:03.329 05:56:24 -- common/autotest_common.sh@829 -- # '[' -z 82753 ']' 00:16:03.329 05:56:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.329 05:56:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.329 05:56:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.329 05:56:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.329 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:16:03.329 [2024-12-15 05:56:24.872693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:03.329 [2024-12-15 05:56:24.872796] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.588 [2024-12-15 05:56:25.010457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.588 [2024-12-15 05:56:25.050259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.588 [2024-12-15 05:56:25.050432] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.588 [2024-12-15 05:56:25.050447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.588 [2024-12-15 05:56:25.050458] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:03.588 [2024-12-15 05:56:25.050487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.530 05:56:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.530 05:56:25 -- common/autotest_common.sh@862 -- # return 0 00:16:04.530 05:56:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:04.530 05:56:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:04.530 05:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:04.530 05:56:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.530 05:56:25 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:04.530 05:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.530 05:56:25 -- common/autotest_common.sh@10 -- # set +x 00:16:04.530 [2024-12-15 05:56:25.947940] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.530 [2024-12-15 05:56:25.956053] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:04.530 null0 00:16:04.530 [2024-12-15 05:56:25.988033] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.530 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:04.530 05:56:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.530 05:56:26 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82785 00:16:04.530 05:56:26 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:04.530 05:56:26 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82785 /tmp/host.sock 00:16:04.530 05:56:26 -- common/autotest_common.sh@829 -- # '[' -z 82785 ']' 00:16:04.530 05:56:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:04.530 05:56:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.530 05:56:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:04.530 05:56:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.530 05:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:04.530 [2024-12-15 05:56:26.051084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:04.530 [2024-12-15 05:56:26.051393] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82785 ] 00:16:04.789 [2024-12-15 05:56:26.183441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.789 [2024-12-15 05:56:26.222522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:04.789 [2024-12-15 05:56:26.222897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.789 05:56:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.789 05:56:26 -- common/autotest_common.sh@862 -- # return 0 00:16:04.789 05:56:26 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:04.789 05:56:26 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:04.789 05:56:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.789 05:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:04.789 05:56:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.789 05:56:26 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:04.789 05:56:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.789 05:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:04.789 05:56:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.789 05:56:26 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:04.789 05:56:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.789 05:56:26 -- common/autotest_common.sh@10 -- # set +x 00:16:06.167 [2024-12-15 05:56:27.370833] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:06.167 [2024-12-15 05:56:27.370861] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:06.167 [2024-12-15 05:56:27.370892] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:06.167 [2024-12-15 05:56:27.376886] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:06.167 [2024-12-15 05:56:27.432766] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:06.167 [2024-12-15 05:56:27.432991] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:06.167 [2024-12-15 05:56:27.433064] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:06.167 [2024-12-15 05:56:27.433229] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:06.167 [2024-12-15 05:56:27.433311] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:06.167 05:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:06.167 [2024-12-15 
05:56:27.439494] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2332af0 was disconnected and freed. delete nvme_qpair. 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:06.167 05:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:06.167 05:56:27 -- common/autotest_common.sh@10 -- # set +x 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:06.167 05:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:06.167 05:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.167 05:56:27 -- common/autotest_common.sh@10 -- # set +x 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:06.167 05:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:06.167 05:56:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:07.104 05:56:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:07.104 05:56:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.104 05:56:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.104 05:56:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:07.104 05:56:28 -- common/autotest_common.sh@10 -- # set +x 00:16:07.104 05:56:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:07.104 05:56:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:07.104 05:56:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.104 05:56:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:07.104 05:56:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:08.040 05:56:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:08.040 05:56:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:08.040 05:56:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.040 05:56:29 -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 05:56:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:08.040 05:56:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:08.040 05:56:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:08.040 05:56:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.299 05:56:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:08.299 05:56:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:09.235 05:56:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:09.235 05:56:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:16:09.235 05:56:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:09.235 05:56:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.235 05:56:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:09.235 05:56:30 -- common/autotest_common.sh@10 -- # set +x 00:16:09.235 05:56:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:09.235 05:56:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.235 05:56:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:09.235 05:56:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:10.171 05:56:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:10.171 05:56:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:10.171 05:56:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.171 05:56:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:10.171 05:56:31 -- common/autotest_common.sh@10 -- # set +x 00:16:10.171 05:56:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:10.171 05:56:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:10.171 05:56:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.171 05:56:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:10.171 05:56:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:11.549 05:56:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:11.549 05:56:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:11.549 05:56:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:11.549 05:56:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:11.549 05:56:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.549 05:56:32 -- common/autotest_common.sh@10 -- # set +x 00:16:11.549 05:56:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:11.549 05:56:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.549 [2024-12-15 05:56:32.860879] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:11.549 [2024-12-15 05:56:32.861153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.549 [2024-12-15 05:56:32.861289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.549 [2024-12-15 05:56:32.861307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.549 [2024-12-15 05:56:32.861317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.549 [2024-12-15 05:56:32.861327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.549 [2024-12-15 05:56:32.861337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.549 [2024-12-15 05:56:32.861347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.549 [2024-12-15 05:56:32.861356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.549 [2024-12-15 
05:56:32.861366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.549 [2024-12-15 05:56:32.861375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.549 [2024-12-15 05:56:32.861385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f7890 is same with the state(5) to be set 00:16:11.549 05:56:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:11.549 05:56:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:11.549 [2024-12-15 05:56:32.870879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f7890 (9): Bad file descriptor 00:16:11.549 [2024-12-15 05:56:32.880930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:12.485 05:56:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:12.485 05:56:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:12.485 05:56:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:12.485 05:56:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.485 05:56:33 -- common/autotest_common.sh@10 -- # set +x 00:16:12.485 05:56:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:12.485 05:56:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:12.485 [2024-12-15 05:56:33.906997] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:13.421 [2024-12-15 05:56:34.931040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:14.356 [2024-12-15 05:56:35.955016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:14.356 [2024-12-15 05:56:35.955189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f7890 with addr=10.0.0.2, port=4420 00:16:14.356 [2024-12-15 05:56:35.955230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f7890 is same with the state(5) to be set 00:16:14.356 [2024-12-15 05:56:35.955286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:14.356 [2024-12-15 05:56:35.955311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:14.356 [2024-12-15 05:56:35.955330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:14.356 [2024-12-15 05:56:35.955350] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:14.356 [2024-12-15 05:56:35.956737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f7890 (9): Bad file descriptor 00:16:14.356 [2024-12-15 05:56:35.957174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
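Note: the connect() errno 110 storm above is the expected result of the step traced at host/discovery_remove_ifc.sh@75-76, where the test pulls the target address out from under the live connection and then waits for the host-side bdev list to drain. A minimal sketch of that step using the namespace, interface and socket names shown in the trace (the loop is a simplified version of wait_for_bdev ''):

  # Take the target address away from the running subsystem (mirrors @75-76 above).
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

  # Simplified wait_for_bdev '': poll once per second until no bdevs remain on the host socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while [[ -n "$($rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]]; do
      sleep 1
  done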
00:16:14.356 [2024-12-15 05:56:35.957343] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:14.356 [2024-12-15 05:56:35.957429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.356 [2024-12-15 05:56:35.957474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.356 [2024-12-15 05:56:35.957502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.356 [2024-12-15 05:56:35.957525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.356 [2024-12-15 05:56:35.957546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.356 [2024-12-15 05:56:35.957565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.356 [2024-12-15 05:56:35.957586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.356 [2024-12-15 05:56:35.957605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.356 [2024-12-15 05:56:35.957627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.356 [2024-12-15 05:56:35.957646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.356 [2024-12-15 05:56:35.957666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
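Note: the repeated reconnect attempts and the final "Resetting controller failed" above are governed by the timeouts passed to bdev_nvme_start_discovery back at host/discovery_remove_ifc.sh@69. For reference, that invocation with its failure knobs (arguments copied from the trace; only the explicit rpc.py wrapper around rpc_cmd is an assumption):

  # Discovery with short timeouts so a dead path is declared failed quickly (values from the log).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach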
00:16:14.356 [2024-12-15 05:56:35.957700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f6ef0 (9): Bad file descriptor 00:16:14.356 [2024-12-15 05:56:35.958441] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:14.356 [2024-12-15 05:56:35.958761] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:14.356 05:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.356 05:56:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:14.356 05:56:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:15.771 05:56:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:15.771 05:56:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.771 05:56:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:15.771 05:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.771 05:56:36 -- common/autotest_common.sh@10 -- # set +x 00:16:15.771 05:56:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:15.771 05:56:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:15.771 05:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:15.771 05:56:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.771 05:56:37 -- common/autotest_common.sh@10 -- # set +x 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:15.771 05:56:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:15.771 05:56:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:16.366 [2024-12-15 05:56:37.966258] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:16.366 [2024-12-15 05:56:37.966295] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:16.366 [2024-12-15 05:56:37.966312] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:16.366 [2024-12-15 05:56:37.972343] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:16.625 [2024-12-15 05:56:38.027282] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:16.625 [2024-12-15 05:56:38.027328] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:16.625 [2024-12-15 05:56:38.027351] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:16.625 [2024-12-15 05:56:38.027366] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:16.625 [2024-12-15 05:56:38.027375] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:16.625 [2024-12-15 05:56:38.034675] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22e6e30 was disconnected and freed. delete nvme_qpair. 00:16:16.625 05:56:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.625 05:56:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.625 05:56:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.625 05:56:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.625 05:56:38 -- common/autotest_common.sh@10 -- # set +x 00:16:16.625 05:56:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.625 05:56:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.625 05:56:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.625 05:56:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:16.625 05:56:38 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:16.625 05:56:38 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82785 00:16:16.625 05:56:38 -- common/autotest_common.sh@936 -- # '[' -z 82785 ']' 00:16:16.625 05:56:38 -- common/autotest_common.sh@940 -- # kill -0 82785 00:16:16.625 05:56:38 -- common/autotest_common.sh@941 -- # uname 00:16:16.625 05:56:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.625 05:56:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82785 00:16:16.625 killing process with pid 82785 00:16:16.625 05:56:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.625 05:56:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.625 05:56:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82785' 00:16:16.625 05:56:38 -- common/autotest_common.sh@955 -- # kill 82785 00:16:16.625 05:56:38 -- common/autotest_common.sh@960 -- # wait 82785 00:16:16.885 05:56:38 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:16.885 05:56:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:16.885 05:56:38 -- nvmf/common.sh@116 -- # sync 00:16:16.885 05:56:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:16.885 05:56:38 -- nvmf/common.sh@119 -- # set +e 00:16:16.885 05:56:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:16.885 05:56:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:16.885 rmmod nvme_tcp 00:16:16.885 rmmod nvme_fabrics 00:16:16.885 rmmod nvme_keyring 00:16:16.885 05:56:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:16.885 05:56:38 -- nvmf/common.sh@123 -- # set -e 00:16:16.885 05:56:38 -- nvmf/common.sh@124 -- # return 0 00:16:16.885 05:56:38 -- nvmf/common.sh@477 -- # '[' -n 82753 ']' 00:16:16.885 05:56:38 -- nvmf/common.sh@478 -- # killprocess 82753 00:16:16.885 05:56:38 -- common/autotest_common.sh@936 -- # '[' -z 82753 ']' 00:16:16.885 05:56:38 -- common/autotest_common.sh@940 -- # kill -0 82753 00:16:16.885 05:56:38 -- common/autotest_common.sh@941 -- # uname 00:16:16.885 05:56:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.885 05:56:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82753 00:16:16.885 killing process with pid 82753 00:16:16.885 05:56:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:16.885 05:56:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
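Note: the teardown above (killprocess 82785 followed by nvmftestfini) follows a fixed pattern in autotest_common.sh. A condensed reconstruction of that pattern from the kill -0 / ps / kill / wait checks visible in the trace; this is a sketch, not the script's actual function body:

  # Reconstructed from the traced checks: refuse bogus pids, never kill sudo, then kill and reap.
  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" || return 1                      # process must still exist
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [[ "$name" != sudo ]] || return 1               # safety check seen in the trace
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                             # reap if it is our child
  }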
00:16:16.885 05:56:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82753' 00:16:16.885 05:56:38 -- common/autotest_common.sh@955 -- # kill 82753 00:16:16.885 05:56:38 -- common/autotest_common.sh@960 -- # wait 82753 00:16:17.144 05:56:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:17.144 05:56:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:17.144 05:56:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:17.144 05:56:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.144 05:56:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:17.144 05:56:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.144 05:56:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.144 05:56:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.144 05:56:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:17.144 00:16:17.144 real 0m14.369s 00:16:17.144 user 0m22.795s 00:16:17.145 sys 0m2.309s 00:16:17.145 05:56:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:17.145 05:56:38 -- common/autotest_common.sh@10 -- # set +x 00:16:17.145 ************************************ 00:16:17.145 END TEST nvmf_discovery_remove_ifc 00:16:17.145 ************************************ 00:16:17.145 05:56:38 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:16:17.145 05:56:38 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:17.145 05:56:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:17.145 05:56:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.145 05:56:38 -- common/autotest_common.sh@10 -- # set +x 00:16:17.145 ************************************ 00:16:17.145 START TEST nvmf_digest 00:16:17.145 ************************************ 00:16:17.145 05:56:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:17.404 * Looking for test storage... 00:16:17.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:17.404 05:56:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:17.404 05:56:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:17.404 05:56:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:17.404 05:56:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:17.404 05:56:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:17.404 05:56:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:17.404 05:56:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:17.404 05:56:38 -- scripts/common.sh@335 -- # IFS=.-: 00:16:17.404 05:56:38 -- scripts/common.sh@335 -- # read -ra ver1 00:16:17.404 05:56:38 -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.404 05:56:38 -- scripts/common.sh@336 -- # read -ra ver2 00:16:17.404 05:56:38 -- scripts/common.sh@337 -- # local 'op=<' 00:16:17.404 05:56:38 -- scripts/common.sh@339 -- # ver1_l=2 00:16:17.404 05:56:38 -- scripts/common.sh@340 -- # ver2_l=1 00:16:17.404 05:56:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:17.404 05:56:38 -- scripts/common.sh@343 -- # case "$op" in 00:16:17.404 05:56:38 -- scripts/common.sh@344 -- # : 1 00:16:17.404 05:56:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:17.404 05:56:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:17.404 05:56:38 -- scripts/common.sh@364 -- # decimal 1 00:16:17.404 05:56:38 -- scripts/common.sh@352 -- # local d=1 00:16:17.404 05:56:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.404 05:56:38 -- scripts/common.sh@354 -- # echo 1 00:16:17.404 05:56:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:17.404 05:56:38 -- scripts/common.sh@365 -- # decimal 2 00:16:17.404 05:56:38 -- scripts/common.sh@352 -- # local d=2 00:16:17.404 05:56:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.404 05:56:38 -- scripts/common.sh@354 -- # echo 2 00:16:17.404 05:56:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:17.404 05:56:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:17.404 05:56:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:17.404 05:56:38 -- scripts/common.sh@367 -- # return 0 00:16:17.404 05:56:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.404 05:56:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:17.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.404 --rc genhtml_branch_coverage=1 00:16:17.404 --rc genhtml_function_coverage=1 00:16:17.404 --rc genhtml_legend=1 00:16:17.404 --rc geninfo_all_blocks=1 00:16:17.404 --rc geninfo_unexecuted_blocks=1 00:16:17.404 00:16:17.404 ' 00:16:17.404 05:56:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:17.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.404 --rc genhtml_branch_coverage=1 00:16:17.404 --rc genhtml_function_coverage=1 00:16:17.404 --rc genhtml_legend=1 00:16:17.404 --rc geninfo_all_blocks=1 00:16:17.404 --rc geninfo_unexecuted_blocks=1 00:16:17.404 00:16:17.404 ' 00:16:17.404 05:56:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:17.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.404 --rc genhtml_branch_coverage=1 00:16:17.404 --rc genhtml_function_coverage=1 00:16:17.404 --rc genhtml_legend=1 00:16:17.404 --rc geninfo_all_blocks=1 00:16:17.404 --rc geninfo_unexecuted_blocks=1 00:16:17.404 00:16:17.404 ' 00:16:17.404 05:56:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:17.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.404 --rc genhtml_branch_coverage=1 00:16:17.404 --rc genhtml_function_coverage=1 00:16:17.405 --rc genhtml_legend=1 00:16:17.405 --rc geninfo_all_blocks=1 00:16:17.405 --rc geninfo_unexecuted_blocks=1 00:16:17.405 00:16:17.405 ' 00:16:17.405 05:56:38 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.405 05:56:38 -- nvmf/common.sh@7 -- # uname -s 00:16:17.405 05:56:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.405 05:56:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.405 05:56:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.405 05:56:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.405 05:56:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.405 05:56:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.405 05:56:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.405 05:56:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.405 05:56:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.405 05:56:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.405 05:56:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:16:17.405 
05:56:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:16:17.405 05:56:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.405 05:56:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.405 05:56:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.405 05:56:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.405 05:56:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.405 05:56:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.405 05:56:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.405 05:56:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.405 05:56:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.405 05:56:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.405 05:56:38 -- paths/export.sh@5 -- # export PATH 00:16:17.405 05:56:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.405 05:56:38 -- nvmf/common.sh@46 -- # : 0 00:16:17.405 05:56:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:17.405 05:56:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:17.405 05:56:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:17.405 05:56:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.405 05:56:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.405 05:56:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
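Note: with NET_TYPE=virt, nvmftestinit hands control to nvmf_veth_init, whose individual ip/iptables commands are traced below. The topology it builds can be summarized as follows; this is a condensed sketch using the names and addresses from the trace and omits the second target interface (nvmf_tgt_if2 / 10.0.0.3) that the full helper also wires up:

  # Condensed nvmf_veth_init: an isolated target namespace reachable from the host over veth pairs.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT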
00:16:17.405 05:56:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:17.405 05:56:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:17.405 05:56:38 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:17.405 05:56:38 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:17.405 05:56:38 -- host/digest.sh@16 -- # runtime=2 00:16:17.405 05:56:38 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:16:17.405 05:56:38 -- host/digest.sh@132 -- # nvmftestinit 00:16:17.405 05:56:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:17.405 05:56:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.405 05:56:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:17.405 05:56:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:17.405 05:56:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:17.405 05:56:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.405 05:56:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.405 05:56:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.405 05:56:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:17.405 05:56:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:17.405 05:56:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:17.405 05:56:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:17.405 05:56:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:17.405 05:56:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:17.405 05:56:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.405 05:56:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.405 05:56:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:17.405 05:56:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:17.405 05:56:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.405 05:56:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.405 05:56:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.405 05:56:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.405 05:56:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.405 05:56:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.405 05:56:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.405 05:56:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.405 05:56:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:17.405 05:56:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:17.405 Cannot find device "nvmf_tgt_br" 00:16:17.405 05:56:38 -- nvmf/common.sh@154 -- # true 00:16:17.405 05:56:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.405 Cannot find device "nvmf_tgt_br2" 00:16:17.405 05:56:38 -- nvmf/common.sh@155 -- # true 00:16:17.405 05:56:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:17.405 05:56:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:17.405 Cannot find device "nvmf_tgt_br" 00:16:17.405 05:56:38 -- nvmf/common.sh@157 -- # true 00:16:17.405 05:56:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:17.405 Cannot find device "nvmf_tgt_br2" 00:16:17.405 05:56:39 -- nvmf/common.sh@158 -- # true 00:16:17.405 05:56:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:17.664 05:56:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:17.664 
05:56:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.664 05:56:39 -- nvmf/common.sh@161 -- # true 00:16:17.664 05:56:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.664 05:56:39 -- nvmf/common.sh@162 -- # true 00:16:17.664 05:56:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:17.664 05:56:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:17.664 05:56:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.664 05:56:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.664 05:56:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.664 05:56:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.664 05:56:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.664 05:56:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:17.664 05:56:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:17.664 05:56:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:17.664 05:56:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:17.664 05:56:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:17.664 05:56:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:17.664 05:56:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.664 05:56:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.664 05:56:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.664 05:56:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:17.665 05:56:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:17.665 05:56:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.665 05:56:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.665 05:56:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.665 05:56:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.665 05:56:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.665 05:56:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:17.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:16:17.665 00:16:17.665 --- 10.0.0.2 ping statistics --- 00:16:17.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.665 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:17.665 05:56:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:17.665 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:17.665 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:17.665 00:16:17.665 --- 10.0.0.3 ping statistics --- 00:16:17.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.665 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:17.665 05:56:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:16:17.665 00:16:17.665 --- 10.0.0.1 ping statistics --- 00:16:17.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.665 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:16:17.665 05:56:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.665 05:56:39 -- nvmf/common.sh@421 -- # return 0 00:16:17.665 05:56:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:17.665 05:56:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.665 05:56:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:17.665 05:56:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:17.665 05:56:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.665 05:56:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:17.665 05:56:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.665 05:56:39 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:17.665 05:56:39 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:17.665 05:56:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:17.665 05:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.665 05:56:39 -- common/autotest_common.sh@10 -- # set +x 00:16:17.665 ************************************ 00:16:17.665 START TEST nvmf_digest_clean 00:16:17.665 ************************************ 00:16:17.665 05:56:39 -- common/autotest_common.sh@1114 -- # run_digest 00:16:17.665 05:56:39 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:17.665 05:56:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.665 05:56:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.665 05:56:39 -- common/autotest_common.sh@10 -- # set +x 00:16:17.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.665 05:56:39 -- nvmf/common.sh@469 -- # nvmfpid=83202 00:16:17.665 05:56:39 -- nvmf/common.sh@470 -- # waitforlisten 83202 00:16:17.665 05:56:39 -- common/autotest_common.sh@829 -- # '[' -z 83202 ']' 00:16:17.665 05:56:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.665 05:56:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.665 05:56:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.665 05:56:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:17.665 05:56:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.665 05:56:39 -- common/autotest_common.sh@10 -- # set +x 00:16:17.924 [2024-12-15 05:56:39.349790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:17.924 [2024-12-15 05:56:39.349925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.924 [2024-12-15 05:56:39.480074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.924 [2024-12-15 05:56:39.512068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.924 [2024-12-15 05:56:39.512212] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.924 [2024-12-15 05:56:39.512224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.924 [2024-12-15 05:56:39.512232] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.924 [2024-12-15 05:56:39.512260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.861 05:56:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.861 05:56:40 -- common/autotest_common.sh@862 -- # return 0 00:16:18.861 05:56:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:18.861 05:56:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.861 05:56:40 -- common/autotest_common.sh@10 -- # set +x 00:16:18.861 05:56:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.861 05:56:40 -- host/digest.sh@120 -- # common_target_config 00:16:18.861 05:56:40 -- host/digest.sh@43 -- # rpc_cmd 00:16:18.861 05:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.861 05:56:40 -- common/autotest_common.sh@10 -- # set +x 00:16:18.861 null0 00:16:18.861 [2024-12-15 05:56:40.413051] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.861 [2024-12-15 05:56:40.437151] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.861 05:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.861 05:56:40 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:18.861 05:56:40 -- host/digest.sh@77 -- # local rw bs qd 00:16:18.861 05:56:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:18.861 05:56:40 -- host/digest.sh@80 -- # rw=randread 00:16:18.861 05:56:40 -- host/digest.sh@80 -- # bs=4096 00:16:18.861 05:56:40 -- host/digest.sh@80 -- # qd=128 00:16:18.861 05:56:40 -- host/digest.sh@82 -- # bperfpid=83239 00:16:18.861 05:56:40 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:18.861 05:56:40 -- host/digest.sh@83 -- # waitforlisten 83239 /var/tmp/bperf.sock 00:16:18.861 05:56:40 -- common/autotest_common.sh@829 -- # '[' -z 83239 ']' 00:16:18.861 05:56:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:18.861 05:56:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.861 05:56:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:18.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
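Note: run_bperf randread 4096 128 wraps the bdevperf invocation traced at host/digest.sh@81-83. The shape of that step, with the arguments taken from the trace (the polling loop again stands in for waitforlisten):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  BPERF_SOCK=/var/tmp/bperf.sock

  # -z keeps bdevperf idle until perform_tests is sent; --wait-for-rpc defers framework init.
  "$BDEVPERF" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done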
00:16:18.861 05:56:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.861 05:56:40 -- common/autotest_common.sh@10 -- # set +x 00:16:18.861 [2024-12-15 05:56:40.486894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:18.861 [2024-12-15 05:56:40.487244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83239 ] 00:16:19.121 [2024-12-15 05:56:40.623663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.121 [2024-12-15 05:56:40.663315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.121 05:56:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.121 05:56:40 -- common/autotest_common.sh@862 -- # return 0 00:16:19.121 05:56:40 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:19.121 05:56:40 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:19.121 05:56:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:19.380 05:56:40 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:19.380 05:56:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:19.948 nvme0n1 00:16:19.948 05:56:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:19.948 05:56:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:19.948 Running I/O for 2 seconds... 
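Note: once bdevperf is listening, the digest test finishes framework init, attaches the target with data digest enabled, and triggers the timed run. The three calls as they appear in the trace above (copied, with the socket path factored into a variable for readability):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  SOCK=/var/tmp/bperf.sock

  $RPC -s $SOCK framework_start_init
  # --ddgst enables the NVMe/TCP data digest (CRC32C) on the data path.
  $RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $BPERF_PY -s $SOCK perform_tests      # runs the 2-second workload configured at bdevperf start-up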
00:16:21.851 00:16:21.851 Latency(us) 00:16:21.851 [2024-12-15T05:56:43.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.851 [2024-12-15T05:56:43.492Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:21.851 nvme0n1 : 2.01 16575.66 64.75 0.00 0.00 7716.85 7000.44 18469.24 00:16:21.851 [2024-12-15T05:56:43.493Z] =================================================================================================================== 00:16:21.852 [2024-12-15T05:56:43.493Z] Total : 16575.66 64.75 0.00 0.00 7716.85 7000.44 18469.24 00:16:21.852 0 00:16:21.852 05:56:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:21.852 05:56:43 -- host/digest.sh@92 -- # get_accel_stats 00:16:21.852 05:56:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:21.852 05:56:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:21.852 | select(.opcode=="crc32c") 00:16:21.852 | "\(.module_name) \(.executed)"' 00:16:21.852 05:56:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:22.111 05:56:43 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:22.111 05:56:43 -- host/digest.sh@93 -- # exp_module=software 00:16:22.111 05:56:43 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:22.111 05:56:43 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:22.111 05:56:43 -- host/digest.sh@97 -- # killprocess 83239 00:16:22.111 05:56:43 -- common/autotest_common.sh@936 -- # '[' -z 83239 ']' 00:16:22.111 05:56:43 -- common/autotest_common.sh@940 -- # kill -0 83239 00:16:22.111 05:56:43 -- common/autotest_common.sh@941 -- # uname 00:16:22.111 05:56:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.111 05:56:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83239 00:16:22.111 killing process with pid 83239 00:16:22.111 Received shutdown signal, test time was about 2.000000 seconds 00:16:22.111 00:16:22.111 Latency(us) 00:16:22.111 [2024-12-15T05:56:43.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.111 [2024-12-15T05:56:43.752Z] =================================================================================================================== 00:16:22.111 [2024-12-15T05:56:43.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:22.111 05:56:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:22.111 05:56:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:22.111 05:56:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83239' 00:16:22.111 05:56:43 -- common/autotest_common.sh@955 -- # kill 83239 00:16:22.111 05:56:43 -- common/autotest_common.sh@960 -- # wait 83239 00:16:22.370 05:56:43 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:22.370 05:56:43 -- host/digest.sh@77 -- # local rw bs qd 00:16:22.370 05:56:43 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:22.370 05:56:43 -- host/digest.sh@80 -- # rw=randread 00:16:22.370 05:56:43 -- host/digest.sh@80 -- # bs=131072 00:16:22.370 05:56:43 -- host/digest.sh@80 -- # qd=16 00:16:22.370 05:56:43 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:22.370 05:56:43 -- host/digest.sh@82 -- # bperfpid=83287 00:16:22.370 05:56:43 -- host/digest.sh@83 -- # waitforlisten 83287 /var/tmp/bperf.sock 00:16:22.370 05:56:43 -- 
common/autotest_common.sh@829 -- # '[' -z 83287 ']' 00:16:22.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:22.371 05:56:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:22.371 05:56:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.371 05:56:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:22.371 05:56:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.371 05:56:43 -- common/autotest_common.sh@10 -- # set +x 00:16:22.371 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:22.371 Zero copy mechanism will not be used. 00:16:22.371 [2024-12-15 05:56:43.901376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:22.371 [2024-12-15 05:56:43.901465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83287 ] 00:16:22.630 [2024-12-15 05:56:44.032990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.630 [2024-12-15 05:56:44.066026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.630 05:56:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.630 05:56:44 -- common/autotest_common.sh@862 -- # return 0 00:16:22.630 05:56:44 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:22.630 05:56:44 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:22.630 05:56:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:22.888 05:56:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:22.888 05:56:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:23.146 nvme0n1 00:16:23.146 05:56:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:23.146 05:56:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:23.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:23.146 Zero copy mechanism will not be used. 00:16:23.146 Running I/O for 2 seconds... 
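Note: the pass/fail decision after each run comes from the accel layer's statistics rather than from bdevperf output: the test asks which module executed the crc32c operations and checks that it matches the expected one (software here, since no accel engine is configured). A sketch of that check, with the jq filter copied from the trace above:

  # Which accel module computed the digests, and how many crc32c operations did it execute?
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))                  # digests were actually computed
  [[ "$acc_module" == software ]]         # and by the expected module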
00:16:25.678 00:16:25.678 Latency(us) 00:16:25.678 [2024-12-15T05:56:47.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.678 [2024-12-15T05:56:47.319Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:25.678 nvme0n1 : 2.00 8237.67 1029.71 0.00 0.00 1939.38 1675.64 4855.62 00:16:25.678 [2024-12-15T05:56:47.319Z] =================================================================================================================== 00:16:25.678 [2024-12-15T05:56:47.319Z] Total : 8237.67 1029.71 0.00 0.00 1939.38 1675.64 4855.62 00:16:25.678 0 00:16:25.678 05:56:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:25.678 05:56:46 -- host/digest.sh@92 -- # get_accel_stats 00:16:25.678 05:56:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:25.678 05:56:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:25.678 | select(.opcode=="crc32c") 00:16:25.678 | "\(.module_name) \(.executed)"' 00:16:25.678 05:56:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:25.678 05:56:47 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:25.678 05:56:47 -- host/digest.sh@93 -- # exp_module=software 00:16:25.678 05:56:47 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:25.678 05:56:47 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:25.678 05:56:47 -- host/digest.sh@97 -- # killprocess 83287 00:16:25.678 05:56:47 -- common/autotest_common.sh@936 -- # '[' -z 83287 ']' 00:16:25.678 05:56:47 -- common/autotest_common.sh@940 -- # kill -0 83287 00:16:25.678 05:56:47 -- common/autotest_common.sh@941 -- # uname 00:16:25.678 05:56:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.679 05:56:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83287 00:16:25.679 killing process with pid 83287 00:16:25.679 Received shutdown signal, test time was about 2.000000 seconds 00:16:25.679 00:16:25.679 Latency(us) 00:16:25.679 [2024-12-15T05:56:47.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.679 [2024-12-15T05:56:47.320Z] =================================================================================================================== 00:16:25.679 [2024-12-15T05:56:47.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.679 05:56:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:25.679 05:56:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:25.679 05:56:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83287' 00:16:25.679 05:56:47 -- common/autotest_common.sh@955 -- # kill 83287 00:16:25.679 05:56:47 -- common/autotest_common.sh@960 -- # wait 83287 00:16:25.679 05:56:47 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:25.679 05:56:47 -- host/digest.sh@77 -- # local rw bs qd 00:16:25.679 05:56:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:25.679 05:56:47 -- host/digest.sh@80 -- # rw=randwrite 00:16:25.679 05:56:47 -- host/digest.sh@80 -- # bs=4096 00:16:25.679 05:56:47 -- host/digest.sh@80 -- # qd=128 00:16:25.679 05:56:47 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:25.679 05:56:47 -- host/digest.sh@82 -- # bperfpid=83334 00:16:25.679 05:56:47 -- host/digest.sh@83 -- # waitforlisten 83334 /var/tmp/bperf.sock 00:16:25.679 05:56:47 -- 
common/autotest_common.sh@829 -- # '[' -z 83334 ']' 00:16:25.679 05:56:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:25.679 05:56:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.679 05:56:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:25.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:25.679 05:56:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.679 05:56:47 -- common/autotest_common.sh@10 -- # set +x 00:16:25.679 [2024-12-15 05:56:47.286332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:25.679 [2024-12-15 05:56:47.286697] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83334 ] 00:16:25.938 [2024-12-15 05:56:47.418549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.938 [2024-12-15 05:56:47.451549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.938 05:56:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.938 05:56:47 -- common/autotest_common.sh@862 -- # return 0 00:16:25.938 05:56:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:25.938 05:56:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:25.938 05:56:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:26.197 05:56:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:26.197 05:56:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:26.456 nvme0n1 00:16:26.456 05:56:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:26.456 05:56:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:26.715 Running I/O for 2 seconds... 
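Note: digest.sh@122-125 repeat the same run_bperf helper over four workload shapes; only the I/O pattern, I/O size and queue depth change between passes. Schematically (run_bperf and the parameter sets are taken from the trace; the loop form itself is an assumption, the script simply calls the helper four times):

  # The four nvmf_digest_clean passes seen in this log: rw / io-size / queue-depth.
  for cfg in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
      set -- $cfg
      run_bperf "$1" "$2" "$3"    # starts bdevperf, attaches nvme0 with --ddgst, runs 2s of I/O
  done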
00:16:28.656 00:16:28.656 Latency(us) 00:16:28.656 [2024-12-15T05:56:50.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.656 [2024-12-15T05:56:50.297Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.656 nvme0n1 : 2.00 17509.80 68.40 0.00 0.00 7304.36 6583.39 16086.11 00:16:28.656 [2024-12-15T05:56:50.297Z] =================================================================================================================== 00:16:28.656 [2024-12-15T05:56:50.297Z] Total : 17509.80 68.40 0.00 0.00 7304.36 6583.39 16086.11 00:16:28.656 0 00:16:28.656 05:56:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:28.656 05:56:50 -- host/digest.sh@92 -- # get_accel_stats 00:16:28.656 05:56:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:28.656 | select(.opcode=="crc32c") 00:16:28.656 | "\(.module_name) \(.executed)"' 00:16:28.656 05:56:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:28.656 05:56:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:28.915 05:56:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:28.915 05:56:50 -- host/digest.sh@93 -- # exp_module=software 00:16:28.915 05:56:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:28.915 05:56:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:28.915 05:56:50 -- host/digest.sh@97 -- # killprocess 83334 00:16:28.915 05:56:50 -- common/autotest_common.sh@936 -- # '[' -z 83334 ']' 00:16:28.915 05:56:50 -- common/autotest_common.sh@940 -- # kill -0 83334 00:16:28.916 05:56:50 -- common/autotest_common.sh@941 -- # uname 00:16:28.916 05:56:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:28.916 05:56:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83334 00:16:28.916 killing process with pid 83334 00:16:28.916 Received shutdown signal, test time was about 2.000000 seconds 00:16:28.916 00:16:28.916 Latency(us) 00:16:28.916 [2024-12-15T05:56:50.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.916 [2024-12-15T05:56:50.557Z] =================================================================================================================== 00:16:28.916 [2024-12-15T05:56:50.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.916 05:56:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:28.916 05:56:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:28.916 05:56:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83334' 00:16:28.916 05:56:50 -- common/autotest_common.sh@955 -- # kill 83334 00:16:28.916 05:56:50 -- common/autotest_common.sh@960 -- # wait 83334 00:16:29.175 05:56:50 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:29.175 05:56:50 -- host/digest.sh@77 -- # local rw bs qd 00:16:29.175 05:56:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:29.175 05:56:50 -- host/digest.sh@80 -- # rw=randwrite 00:16:29.175 05:56:50 -- host/digest.sh@80 -- # bs=131072 00:16:29.175 05:56:50 -- host/digest.sh@80 -- # qd=16 00:16:29.175 05:56:50 -- host/digest.sh@82 -- # bperfpid=83382 00:16:29.175 05:56:50 -- host/digest.sh@83 -- # waitforlisten 83382 /var/tmp/bperf.sock 00:16:29.175 05:56:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:29.175 05:56:50 -- 
common/autotest_common.sh@829 -- # '[' -z 83382 ']' 00:16:29.175 05:56:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:29.175 05:56:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.175 05:56:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:29.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:29.175 05:56:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.175 05:56:50 -- common/autotest_common.sh@10 -- # set +x 00:16:29.175 [2024-12-15 05:56:50.664782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:29.175 [2024-12-15 05:56:50.665101] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83382 ] 00:16:29.175 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:29.175 Zero copy mechanism will not be used. 00:16:29.175 [2024-12-15 05:56:50.803895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.434 [2024-12-15 05:56:50.839490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.434 05:56:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.434 05:56:50 -- common/autotest_common.sh@862 -- # return 0 00:16:29.434 05:56:50 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:29.434 05:56:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:29.434 05:56:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:29.693 05:56:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:29.693 05:56:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:29.952 nvme0n1 00:16:30.211 05:56:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:30.211 05:56:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:30.211 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:30.211 Zero copy mechanism will not be used. 00:16:30.211 Running I/O for 2 seconds... 
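[editor's note] After each run the script verifies that the digest work actually went through the expected accel module: it pulls accel_get_stats over the bperf socket, filters the crc32c opcode with jq, and checks both that at least one operation executed and that the module name is "software" (no hardware offload is configured in this job). The bandwidth column above is just IOPS times I/O size, e.g. 17509.80 × 4096 B ≈ 68.4 MiB/s. A sketch of the verification step, following the RPC call and jq filter shown in the trace (the exact plumbing inside host/digest.sh may differ):

  # query crc32c statistics from the running bdevperf instance
  stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
  read -r acc_module acc_executed < <(jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"' <<< "$stats")

  (( acc_executed > 0 ))            # digest work actually ran
  [[ $acc_module == "software" ]]   # and it ran in the expected module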
00:16:32.116 00:16:32.116 Latency(us) 00:16:32.116 [2024-12-15T05:56:53.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.116 [2024-12-15T05:56:53.757Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:32.116 nvme0n1 : 2.00 6963.94 870.49 0.00 0.00 2292.55 1832.03 5719.51 00:16:32.116 [2024-12-15T05:56:53.757Z] =================================================================================================================== 00:16:32.116 [2024-12-15T05:56:53.757Z] Total : 6963.94 870.49 0.00 0.00 2292.55 1832.03 5719.51 00:16:32.116 0 00:16:32.116 05:56:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:32.375 05:56:53 -- host/digest.sh@92 -- # get_accel_stats 00:16:32.375 05:56:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:32.375 05:56:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:32.375 05:56:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:32.375 | select(.opcode=="crc32c") 00:16:32.375 | "\(.module_name) \(.executed)"' 00:16:32.634 05:56:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:32.634 05:56:54 -- host/digest.sh@93 -- # exp_module=software 00:16:32.634 05:56:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:32.634 05:56:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:32.634 05:56:54 -- host/digest.sh@97 -- # killprocess 83382 00:16:32.634 05:56:54 -- common/autotest_common.sh@936 -- # '[' -z 83382 ']' 00:16:32.634 05:56:54 -- common/autotest_common.sh@940 -- # kill -0 83382 00:16:32.634 05:56:54 -- common/autotest_common.sh@941 -- # uname 00:16:32.634 05:56:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:32.634 05:56:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83382 00:16:32.634 killing process with pid 83382 00:16:32.634 Received shutdown signal, test time was about 2.000000 seconds 00:16:32.634 00:16:32.634 Latency(us) 00:16:32.634 [2024-12-15T05:56:54.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.634 [2024-12-15T05:56:54.275Z] =================================================================================================================== 00:16:32.634 [2024-12-15T05:56:54.275Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.634 05:56:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:32.634 05:56:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:32.634 05:56:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83382' 00:16:32.634 05:56:54 -- common/autotest_common.sh@955 -- # kill 83382 00:16:32.634 05:56:54 -- common/autotest_common.sh@960 -- # wait 83382 00:16:32.634 05:56:54 -- host/digest.sh@126 -- # killprocess 83202 00:16:32.634 05:56:54 -- common/autotest_common.sh@936 -- # '[' -z 83202 ']' 00:16:32.634 05:56:54 -- common/autotest_common.sh@940 -- # kill -0 83202 00:16:32.634 05:56:54 -- common/autotest_common.sh@941 -- # uname 00:16:32.634 05:56:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:32.634 05:56:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83202 00:16:32.634 killing process with pid 83202 00:16:32.634 05:56:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:32.634 05:56:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:32.634 05:56:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83202' 00:16:32.634 
05:56:54 -- common/autotest_common.sh@955 -- # kill 83202 00:16:32.634 05:56:54 -- common/autotest_common.sh@960 -- # wait 83202 00:16:32.893 ************************************ 00:16:32.893 END TEST nvmf_digest_clean 00:16:32.893 ************************************ 00:16:32.893 00:16:32.893 real 0m15.074s 00:16:32.893 user 0m28.712s 00:16:32.893 sys 0m4.192s 00:16:32.893 05:56:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:32.893 05:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:32.893 05:56:54 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:32.893 05:56:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:32.893 05:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.893 05:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:32.893 ************************************ 00:16:32.893 START TEST nvmf_digest_error 00:16:32.893 ************************************ 00:16:32.893 05:56:54 -- common/autotest_common.sh@1114 -- # run_digest_error 00:16:32.893 05:56:54 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:32.893 05:56:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:32.893 05:56:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:32.893 05:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:32.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.893 05:56:54 -- nvmf/common.sh@469 -- # nvmfpid=83460 00:16:32.893 05:56:54 -- nvmf/common.sh@470 -- # waitforlisten 83460 00:16:32.893 05:56:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:32.893 05:56:54 -- common/autotest_common.sh@829 -- # '[' -z 83460 ']' 00:16:32.893 05:56:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.893 05:56:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.893 05:56:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.893 05:56:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.893 05:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:32.893 [2024-12-15 05:56:54.466990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:32.893 [2024-12-15 05:56:54.467065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.153 [2024-12-15 05:56:54.598025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.153 [2024-12-15 05:56:54.629647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:33.153 [2024-12-15 05:56:54.629778] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.153 [2024-12-15 05:56:54.629790] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.153 [2024-12-15 05:56:54.629797] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
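[editor's note] Here the clean-digest test ends and nvmf_digest_error begins: the nvmf target is restarted with --wait-for-rpc so that, before the framework starts, the crc32c opcode can be pinned to the error accel module; the usual target plumbing (a null0 bdev exported over a TCP listener on 10.0.0.2:4420) shows up as notices in the next stretch of the trace. A rough sketch of that target-side setup, using the RPC visible in the log plus standard SPDK target RPCs for the steps the log only reports as notices (bdev size, serial number and the exact helper flow are assumptions):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # route all crc32c operations to the 'error' module so failures can be injected later
  $rpc accel_assign_opc -o crc32c -m error
  $rpc framework_start_init

  # minimal target config behind the notices in the log: a null bdev exported over NVMe/TCP
  $rpc bdev_null_create null0 100 4096            # size/block size are illustrative assumptions
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420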
00:16:33.153 [2024-12-15 05:56:54.629820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.153 05:56:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.153 05:56:54 -- common/autotest_common.sh@862 -- # return 0 00:16:33.153 05:56:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:33.153 05:56:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.153 05:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.153 05:56:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.153 05:56:54 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:33.153 05:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.153 05:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.153 [2024-12-15 05:56:54.754347] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:33.153 05:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.153 05:56:54 -- host/digest.sh@104 -- # common_target_config 00:16:33.153 05:56:54 -- host/digest.sh@43 -- # rpc_cmd 00:16:33.153 05:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.153 05:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.491 null0 00:16:33.491 [2024-12-15 05:56:54.820075] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.491 [2024-12-15 05:56:54.844180] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.491 05:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.491 05:56:54 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:33.491 05:56:54 -- host/digest.sh@54 -- # local rw bs qd 00:16:33.491 05:56:54 -- host/digest.sh@56 -- # rw=randread 00:16:33.491 05:56:54 -- host/digest.sh@56 -- # bs=4096 00:16:33.491 05:56:54 -- host/digest.sh@56 -- # qd=128 00:16:33.491 05:56:54 -- host/digest.sh@58 -- # bperfpid=83485 00:16:33.491 05:56:54 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:33.491 05:56:54 -- host/digest.sh@60 -- # waitforlisten 83485 /var/tmp/bperf.sock 00:16:33.491 05:56:54 -- common/autotest_common.sh@829 -- # '[' -z 83485 ']' 00:16:33.491 05:56:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:33.491 05:56:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:33.491 05:56:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:33.491 05:56:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.491 05:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.491 [2024-12-15 05:56:54.892358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:33.491 [2024-12-15 05:56:54.892424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83485 ] 00:16:33.491 [2024-12-15 05:56:55.028360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.491 [2024-12-15 05:56:55.061845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.767 05:56:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.767 05:56:55 -- common/autotest_common.sh@862 -- # return 0 00:16:33.767 05:56:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:33.767 05:56:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:33.767 05:56:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:33.767 05:56:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.767 05:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:33.767 05:56:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.767 05:56:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:33.767 05:56:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:34.026 nvme0n1 00:16:34.026 05:56:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:34.026 05:56:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.026 05:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:34.285 05:56:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.285 05:56:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:34.285 05:56:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:34.285 Running I/O for 2 seconds... 
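[editor's note] The error run differs from the clean runs only in the initiator-side knobs: NVMe error statistics are enabled and the bdev retry count set as in the trace, injection is kept disabled while the controller attaches, and only then is the crc32c error module switched to corrupt mode, so the reads that follow fail their data-digest check and complete with the transient transport errors printed below. A sketch of that sequence, built from the bperf.sock RPC calls shown in the trace (the -o/-t/-i values are copied verbatim from the log):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # NVMe bdev options used by the test (values as they appear in the trace)
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # keep injection off while the controller connects
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # now inject crc32c corruption and run the workload; every read should report a data digest error
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests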
00:16:34.285 [2024-12-15 05:56:55.814182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.285 [2024-12-15 05:56:55.814228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.285 [2024-12-15 05:56:55.814241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.285 [2024-12-15 05:56:55.829169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.285 [2024-12-15 05:56:55.829201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.285 [2024-12-15 05:56:55.829213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.285 [2024-12-15 05:56:55.844320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.285 [2024-12-15 05:56:55.844352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.285 [2024-12-15 05:56:55.844378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.285 [2024-12-15 05:56:55.859198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.285 [2024-12-15 05:56:55.859232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.285 [2024-12-15 05:56:55.859245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.285 [2024-12-15 05:56:55.875801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.285 [2024-12-15 05:56:55.875833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.285 [2024-12-15 05:56:55.875845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.285 [2024-12-15 05:56:55.891723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.285 [2024-12-15 05:56:55.891754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.285 [2024-12-15 05:56:55.891765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.285 [2024-12-15 05:56:55.906682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.285 [2024-12-15 05:56:55.906713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.285 [2024-12-15 05:56:55.906724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.285 [2024-12-15 05:56:55.922425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.285 [2024-12-15 05:56:55.922461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.285 [2024-12-15 05:56:55.922474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:55.938155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:55.938187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:55.938198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:55.953317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:55.953348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:55.953359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:55.968660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:55.968691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:55.968703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:55.983749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:55.983780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:55.983791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:55.998662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:55.998693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:55.998704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.013672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.013702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.013714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.028934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.028964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.028975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.043913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.043954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.043966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.058640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.058670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.058681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.075698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.075749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.075761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.093529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.093562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.093574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.110840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.110900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.110916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.128292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.128339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.128351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.145732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.145762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.145773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.161766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.161796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.161808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.545 [2024-12-15 05:56:56.177472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.545 [2024-12-15 05:56:56.177503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.545 [2024-12-15 05:56:56.177515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.804 [2024-12-15 05:56:56.193748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.804 [2024-12-15 05:56:56.193778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.193790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.209463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.209494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.209505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.225031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.225062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.225073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.240042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.240071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 
[2024-12-15 05:56:56.240083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.254573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.254603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.254614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.269260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.269290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.269301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.283980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.284010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.284021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.298660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.298692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.298705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.315381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.315415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.315428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.331580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.331610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.331622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.347014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.347044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23365 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.347055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.362415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.362445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.362456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.377941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.377998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.378026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.393483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.393514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.393525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.410548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.410579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.410591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:34.805 [2024-12-15 05:56:56.427014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:34.805 [2024-12-15 05:56:56.427047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.805 [2024-12-15 05:56:56.427059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.064 [2024-12-15 05:56:56.442752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.442786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.442798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.458297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.458329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:1512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.458340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.474117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.474150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.474162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.489735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.489765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.489776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.504713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.504743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.504754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.519893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.519934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.519945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.534717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.534746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.534757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.549786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.549817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.549828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.564714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.564743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.564754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.579685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.579715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.579726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.594515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.594545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.594556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.609472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.609500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.609511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.624386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.624416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.624427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.639539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.639603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.639615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.654880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.654928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.654942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.671010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 
00:16:35.065 [2024-12-15 05:56:56.671041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.671053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.685914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.685943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.685953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.065 [2024-12-15 05:56:56.701656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.065 [2024-12-15 05:56:56.701688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.065 [2024-12-15 05:56:56.701700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.717560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.717590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.717601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.732678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.732708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.732719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.747820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.747849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.747859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.764014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.764046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.764058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.781028] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.781060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.781072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.802835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.802865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.802902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.817745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.817774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.817786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.832791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.832822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.832833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.847765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.847795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.847807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.864076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.864109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.864120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.879496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.879527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.879553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.894265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.894295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.894306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.910170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.910203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.910215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.925948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.925978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.925989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.941314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.941345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.941357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.325 [2024-12-15 05:56:56.956516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.325 [2024-12-15 05:56:56.956547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.325 [2024-12-15 05:56:56.956557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:56.972916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:56.972946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:56.972956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:56.988022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:56.988051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:56.988062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.002696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.002726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.002737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.017791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.017821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.017832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.032824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.032854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.032866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.048103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.048153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.048165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.064125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.064158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.064185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.080111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.080143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.080154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.096391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.096423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.096434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.113470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.113500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.113511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.129641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.129672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.129683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.144749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.144778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.144789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.159716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.159745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.159756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.174946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.174977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.174989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.190143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.190173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.190184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.205021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.205050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 
[2024-12-15 05:56:57.205061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.585 [2024-12-15 05:56:57.221136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.585 [2024-12-15 05:56:57.221169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.585 [2024-12-15 05:56:57.221182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.238016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.238049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.238062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.254332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.254363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.254374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.270272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.270305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.270316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.286055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.286086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.286098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.301341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.301371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.301382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.317767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.317798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9822 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.317810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.333620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.333651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.333663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.349349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.349379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.349391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.365184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.365230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.365241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.381096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.381126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.381137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.396130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.396160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.396172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.411086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.411115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.411126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.427334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.427369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:46 nsid:1 lba:11155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.427382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.444421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.444451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.444462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.845 [2024-12-15 05:56:57.460724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.845 [2024-12-15 05:56:57.460756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.845 [2024-12-15 05:56:57.460768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:35.846 [2024-12-15 05:56:57.478030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:35.846 [2024-12-15 05:56:57.478089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.846 [2024-12-15 05:56:57.478102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.105 [2024-12-15 05:56:57.495179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.105 [2024-12-15 05:56:57.495229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.105 [2024-12-15 05:56:57.495242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.105 [2024-12-15 05:56:57.510696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.105 [2024-12-15 05:56:57.510727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.105 [2024-12-15 05:56:57.510738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.105 [2024-12-15 05:56:57.526541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.105 [2024-12-15 05:56:57.526574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.105 [2024-12-15 05:56:57.526586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.105 [2024-12-15 05:56:57.542435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.105 [2024-12-15 05:56:57.542485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.105 [2024-12-15 05:56:57.542498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.105 [2024-12-15 05:56:57.558315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.558350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.558377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.575061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.575093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.575105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.590255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.590285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.590296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.605058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.605088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.605100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.620160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.620190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.620202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.635103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.635132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.635143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.650113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 
[2024-12-15 05:56:57.650143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.650154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.665072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.665119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.665131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.680101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.680141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.680152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.694968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.694998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.695009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.709875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.709912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.709924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.724764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.724793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.724804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.106 [2024-12-15 05:56:57.740995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.106 [2024-12-15 05:56:57.741024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.106 [2024-12-15 05:56:57.741036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.365 [2024-12-15 05:56:57.757106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf49410) 00:16:36.365 [2024-12-15 05:56:57.757135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.365 [2024-12-15 05:56:57.757145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.365 [2024-12-15 05:56:57.773771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.365 [2024-12-15 05:56:57.773803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.365 [2024-12-15 05:56:57.773814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.365 [2024-12-15 05:56:57.791385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf49410) 00:16:36.365 [2024-12-15 05:56:57.791419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:36.365 [2024-12-15 05:56:57.791446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:36.365 00:16:36.365 Latency(us) 00:16:36.365 [2024-12-15T05:56:58.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.365 [2024-12-15T05:56:58.006Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:36.365 nvme0n1 : 2.01 16193.45 63.26 0.00 0.00 7899.37 7119.59 28597.53 00:16:36.365 [2024-12-15T05:56:58.006Z] =================================================================================================================== 00:16:36.365 [2024-12-15T05:56:58.006Z] Total : 16193.45 63.26 0.00 0.00 7899.37 7119.59 28597.53 00:16:36.365 0 00:16:36.365 05:56:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:36.365 05:56:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:36.365 | .driver_specific 00:16:36.365 | .nvme_error 00:16:36.365 | .status_code 00:16:36.365 | .command_transient_transport_error' 00:16:36.365 05:56:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:36.365 05:56:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:36.624 05:56:58 -- host/digest.sh@71 -- # (( 127 > 0 )) 00:16:36.624 05:56:58 -- host/digest.sh@73 -- # killprocess 83485 00:16:36.624 05:56:58 -- common/autotest_common.sh@936 -- # '[' -z 83485 ']' 00:16:36.624 05:56:58 -- common/autotest_common.sh@940 -- # kill -0 83485 00:16:36.624 05:56:58 -- common/autotest_common.sh@941 -- # uname 00:16:36.624 05:56:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.624 05:56:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83485 00:16:36.624 05:56:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:36.624 05:56:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:36.624 killing process with pid 83485 00:16:36.624 05:56:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83485' 00:16:36.624 05:56:58 -- common/autotest_common.sh@955 -- # kill 83485 00:16:36.624 Received shutdown signal, test time was about 2.000000 seconds 00:16:36.624 00:16:36.624 Latency(us) 
00:16:36.624 [2024-12-15T05:56:58.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.624 [2024-12-15T05:56:58.265Z] =================================================================================================================== 00:16:36.624 [2024-12-15T05:56:58.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:36.624 05:56:58 -- common/autotest_common.sh@960 -- # wait 83485 00:16:36.884 05:56:58 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:16:36.884 05:56:58 -- host/digest.sh@54 -- # local rw bs qd 00:16:36.884 05:56:58 -- host/digest.sh@56 -- # rw=randread 00:16:36.884 05:56:58 -- host/digest.sh@56 -- # bs=131072 00:16:36.884 05:56:58 -- host/digest.sh@56 -- # qd=16 00:16:36.884 05:56:58 -- host/digest.sh@58 -- # bperfpid=83534 00:16:36.884 05:56:58 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:36.884 05:56:58 -- host/digest.sh@60 -- # waitforlisten 83534 /var/tmp/bperf.sock 00:16:36.884 05:56:58 -- common/autotest_common.sh@829 -- # '[' -z 83534 ']' 00:16:36.884 05:56:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:36.884 05:56:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:36.884 05:56:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:36.884 05:56:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.884 05:56:58 -- common/autotest_common.sh@10 -- # set +x 00:16:36.884 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:36.884 Zero copy mechanism will not be used. 00:16:36.884 [2024-12-15 05:56:58.314134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:36.884 [2024-12-15 05:56:58.314257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83534 ] 00:16:36.884 [2024-12-15 05:56:58.447567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.884 [2024-12-15 05:56:58.479642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.820 05:56:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.820 05:56:59 -- common/autotest_common.sh@862 -- # return 0 00:16:37.820 05:56:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:37.820 05:56:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:38.079 05:56:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:38.079 05:56:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.079 05:56:59 -- common/autotest_common.sh@10 -- # set +x 00:16:38.079 05:56:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.079 05:56:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:38.079 05:56:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:38.350 nvme0n1 00:16:38.350 05:56:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:38.350 05:56:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.350 05:56:59 -- common/autotest_common.sh@10 -- # set +x 00:16:38.350 05:56:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.350 05:56:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:38.350 05:56:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:38.350 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:38.350 Zero copy mechanism will not be used. 00:16:38.350 Running I/O for 2 seconds... 
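The traced setup above boils down to a short RPC sequence. The following is a condensed sketch assembled only from the rpc.py, bdevperf.py, and jq invocations visible in this trace (socket path, target address 10.0.0.2:4420, subsystem NQN, and bdev name nvme0n1 are the ones shown here); it is not a verbatim copy of host/digest.sh, and which application each RPC lands on follows the rpc_cmd / bperf_rpc split seen in the trace (bperf_rpc uses -s /var/tmp/bperf.sock, rpc_cmd uses the default socket).

# Sketch of the digest error-injection flow traced in this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF=/var/tmp/bperf.sock

# bdevperf side: keep per-bdev NVMe error counters and retry failed I/O indefinitely.
$RPC -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c error injection (sent via the default RPC socket, as rpc_cmd does above).
$RPC accel_error_inject_error -o crc32c -t disable

# Attach the NVMe/TCP controller with data digest enabled (--ddgst).
$RPC -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 crc32c operations so subsequent reads fail their data digest check.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the queued workload, then pull the transient transport error count back out.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests
$RPC -s $BPERF bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow are therefore the expected result of this injection, and the final jq count is what the test compares against zero.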
00:16:38.350 [2024-12-15 05:56:59.906755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.906815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.906830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.910933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.910963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.910975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.915207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.915259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.915273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.919240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.919273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.919287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.923127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.923199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.923212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.927179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.927229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.927242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.931124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.931195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.931208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.935144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.935216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.935233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.939155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.939219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.939233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.943249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.943296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.943308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.947085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.947131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.947144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.951015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.951061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.951073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.954882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.954938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.954949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.958653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.958700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.958712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.962571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.962618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.962630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.966577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.966624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.966636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.970791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.970838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.970849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.974953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.975008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.350 [2024-12-15 05:56:59.975021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.350 [2024-12-15 05:56:59.979102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.350 [2024-12-15 05:56:59.979158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.351 [2024-12-15 05:56:59.979186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:38.351 [2024-12-15 05:56:59.983529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.351 [2024-12-15 05:56:59.983592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.351 [2024-12-15 05:56:59.983603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:38.351 [2024-12-15 05:56:59.987692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.351 [2024-12-15 05:56:59.987756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:38.351 [2024-12-15 05:56:59.987785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.612 [2024-12-15 05:56:59.992116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.612 [2024-12-15 05:56:59.992163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.612 [2024-12-15 05:56:59.992175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.612 [2024-12-15 05:56:59.996261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.612 [2024-12-15 05:56:59.996307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.612 [2024-12-15 05:56:59.996320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:38.612 [2024-12-15 05:57:00.000173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.612 [2024-12-15 05:57:00.000219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.612 [2024-12-15 05:57:00.000231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:38.612 [2024-12-15 05:57:00.004550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.612 [2024-12-15 05:57:00.004600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.612 [2024-12-15 05:57:00.004614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.612 [2024-12-15 05:57:00.008995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.612 [2024-12-15 05:57:00.009029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.612 [2024-12-15 05:57:00.009042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.612 [2024-12-15 05:57:00.013398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.612 [2024-12-15 05:57:00.013432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.612 [2024-12-15 05:57:00.013445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:38.612 [2024-12-15 05:57:00.017616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.612 [2024-12-15 05:57:00.017666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.612 [2024-12-15 05:57:00.017680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:38.612 [2024-12-15 05:57:00.022025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.612 [2024-12-15 05:57:00.022076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.613 [2024-12-15 05:57:00.022089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.613 [2024-12-15 05:57:00.026583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.613 [2024-12-15 05:57:00.026633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.613 [2024-12-15 05:57:00.026646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.613 [2024-12-15 05:57:00.031684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.613 [2024-12-15 05:57:00.031748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.613 [2024-12-15 05:57:00.031770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:38.613 [2024-12-15 05:57:00.036261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.613 [2024-12-15 05:57:00.036311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.613 [2024-12-15 05:57:00.036324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:38.613 [2024-12-15 05:57:00.040375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.613 [2024-12-15 05:57:00.040424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.613 [2024-12-15 05:57:00.040436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:38.613 [2024-12-15 05:57:00.044740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.613 [2024-12-15 05:57:00.044789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.613 [2024-12-15 05:57:00.044802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:38.613 [2024-12-15 05:57:00.048947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:38.613 [2024-12-15 05:57:00.048994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:38.613 [2024-12-15 05:57:00.049006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same record sequence repeats at roughly 4 ms intervals from 05:57:00.052 through 05:57:00.639 (elapsed 00:16:38.613 to 00:16:39.139): nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done reports *ERROR*: data digest error on tqpair=(0x226c5b0); nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected READ (sqid:1 cid:15 nsid:1, len:32, varying lba, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); nvme_qpair.c: 474:spdk_nvme_print_completion prints COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0 with sqhd cycling 0001/0021/0041/0061 ...]
00:16:39.139 [2024-12-15 05:57:00.644071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0)
00:16:39.139 [2024-12-15 05:57:00.644105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.644119]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.648591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.648640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.648652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.653029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.653062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.653075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.657376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.657424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.657436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.661659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.661707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.661719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.665759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.665808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.665820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.670029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.670076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.670089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.674063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.674111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.674123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.678199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.678246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.678259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.682325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.682373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.682386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.686439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.686487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.686499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.690525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.690573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.690587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.694559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.694607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.694619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.698638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.698687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.698699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.702790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.702840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.702852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.707058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.707108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.707120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.711073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.711121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.139 [2024-12-15 05:57:00.711133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.139 [2024-12-15 05:57:00.715126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.139 [2024-12-15 05:57:00.715199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.715213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.719365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.719399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.719413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.723496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.723544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.723571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.727681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.727730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.727742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.731929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.731977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.731990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.736152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.736200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.736213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.740270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.740305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.740318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.744622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.744670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.744683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.749108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.749157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.749185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.753606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.753656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.753683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.757966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.758016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.758029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.762253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 
00:16:39.140 [2024-12-15 05:57:00.762301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.762328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.766513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.766560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.766572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.140 [2024-12-15 05:57:00.770755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.140 [2024-12-15 05:57:00.770790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.140 [2024-12-15 05:57:00.770803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.775230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.775264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.775277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.779665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.779712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.779725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.784190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.784240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.784268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.788306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.788352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.788365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.792385] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.792432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.792444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.796522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.796570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.796582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.800653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.800701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.800713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.804879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.804937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.804950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.808882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.808957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.808970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.813067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.813116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.813129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.817087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.817133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.817146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.821210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.821257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.821269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.825218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.825264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.825277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.829411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.829458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.829471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.833772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.833821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.833834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.838336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.838369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.838382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.842660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.842720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.842733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.847158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.847191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.847205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.851534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.851581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.851593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.855921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.855981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.855994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.860400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.860448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.860460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.864550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.864598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.864610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.868816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.868864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.868877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.873068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.873116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.873129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.877222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.877271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.877283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.881337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.881386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.401 [2024-12-15 05:57:00.881398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.401 [2024-12-15 05:57:00.885690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.401 [2024-12-15 05:57:00.885739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.885752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.890318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.890367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.890394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.894919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.894964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.894978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.899428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.899478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.899491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.904066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.904132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.904180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.908593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.908642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:39.402 [2024-12-15 05:57:00.908654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.913299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.913362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.913376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.917941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.918002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.918016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.922203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.922251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.922264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.926423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.926472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.926485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.930742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.930792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.930804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.934841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.934898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.934912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.938950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.938999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.939012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.943375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.943409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.943423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.947709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.947773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.947818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.951849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.951924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.951937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.956341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.956391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.956403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.960677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.960727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.960739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.964903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.964950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.964963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.969233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.969284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.969298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.973648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.973697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.973710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.978137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.978185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.978198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.982421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.982469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.982482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.986720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.986768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.986798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.991073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.991138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.991175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.995131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:00.995185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.995198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:00.999498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 
00:16:39.402 [2024-12-15 05:57:00.999549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:00.999575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.402 [2024-12-15 05:57:01.003575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.402 [2024-12-15 05:57:01.003623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.402 [2024-12-15 05:57:01.003636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.403 [2024-12-15 05:57:01.007657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.403 [2024-12-15 05:57:01.007705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.403 [2024-12-15 05:57:01.007717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.403 [2024-12-15 05:57:01.011913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.403 [2024-12-15 05:57:01.011973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.403 [2024-12-15 05:57:01.011986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.403 [2024-12-15 05:57:01.015965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.403 [2024-12-15 05:57:01.016013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.403 [2024-12-15 05:57:01.016025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.403 [2024-12-15 05:57:01.020118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.403 [2024-12-15 05:57:01.020166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.403 [2024-12-15 05:57:01.020179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.403 [2024-12-15 05:57:01.024514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.403 [2024-12-15 05:57:01.024563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.403 [2024-12-15 05:57:01.024576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.403 [2024-12-15 05:57:01.028572] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.403 [2024-12-15 05:57:01.028621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.403 [2024-12-15 05:57:01.028633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.403 [2024-12-15 05:57:01.032713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.403 [2024-12-15 05:57:01.032762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.403 [2024-12-15 05:57:01.032775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.403 [2024-12-15 05:57:01.037328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.403 [2024-12-15 05:57:01.037378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.403 [2024-12-15 05:57:01.037391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.663 [2024-12-15 05:57:01.041751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.663 [2024-12-15 05:57:01.041815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.663 [2024-12-15 05:57:01.041828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.663 [2024-12-15 05:57:01.046258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.046305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.046317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.050351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.050399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.050412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.054376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.054423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.054435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.058337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.058384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.058396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.062310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.062356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.062368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.066287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.066333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.066345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.070357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.070404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.070417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.074421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.074469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.074481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.078586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.078634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.078647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.082633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.082680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.082692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.086811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.086861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.086874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.090755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.090803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.090815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.094734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.094781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.094792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.098735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.098782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.098794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.102843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.102900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.102913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.106789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.106836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.106849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.110797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.110844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.110857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.114825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.114873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.114910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.118786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.118834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.118846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.122701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.122750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.122762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.126756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.126803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.126816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.130855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.130912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.130924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.134902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.134941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.134970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.138931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.138977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:39.664 [2024-12-15 05:57:01.138989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.142900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.142945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.142957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.146945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.146992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.147005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.664 [2024-12-15 05:57:01.151090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.664 [2024-12-15 05:57:01.151138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.664 [2024-12-15 05:57:01.151175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.155082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.155129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.155142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.159032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.159079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.159106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.163438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.163473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.163516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.167903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.167961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.167975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.172469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.172517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.172530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.177010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.177044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.177057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.181440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.181487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.181499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.185975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.186009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.186023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.190420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.190468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.190480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.194719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.194767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.194797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.199003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.199050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.199062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.203112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.203182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.203196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.207278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.207311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.207324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.211317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.211367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.211379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.215495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.215531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.215545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.219555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.219602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.219614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.223673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.223721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.223733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.227837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 
00:16:39.665 [2024-12-15 05:57:01.227909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.227923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.231930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.231988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.232000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.235982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.236029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.236041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.239987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.240034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.240046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.244055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.244103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.244115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.247996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.248043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.248055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.252021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.252068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.252080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.256114] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.256161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.256174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.260159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.260206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.260218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.264185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.264231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.665 [2024-12-15 05:57:01.264243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.665 [2024-12-15 05:57:01.268149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.665 [2024-12-15 05:57:01.268196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.666 [2024-12-15 05:57:01.268208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.666 [2024-12-15 05:57:01.272143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.666 [2024-12-15 05:57:01.272189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.666 [2024-12-15 05:57:01.272202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.666 [2024-12-15 05:57:01.276169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.666 [2024-12-15 05:57:01.276216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.666 [2024-12-15 05:57:01.276228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.666 [2024-12-15 05:57:01.280221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.666 [2024-12-15 05:57:01.280268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.666 [2024-12-15 05:57:01.280280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:39.666 [2024-12-15 05:57:01.284179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.666 [2024-12-15 05:57:01.284226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.666 [2024-12-15 05:57:01.284239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.666 [2024-12-15 05:57:01.288241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.666 [2024-12-15 05:57:01.288288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.666 [2024-12-15 05:57:01.288300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.666 [2024-12-15 05:57:01.292290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.666 [2024-12-15 05:57:01.292337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.666 [2024-12-15 05:57:01.292350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.666 [2024-12-15 05:57:01.296511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.666 [2024-12-15 05:57:01.296561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.666 [2024-12-15 05:57:01.296574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.300847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.300908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.926 [2024-12-15 05:57:01.300921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.305018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.305065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.926 [2024-12-15 05:57:01.305078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.309139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.309186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.926 [2024-12-15 05:57:01.309198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.313212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.313260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.926 [2024-12-15 05:57:01.313272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.317167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.317214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.926 [2024-12-15 05:57:01.317226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.321207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.321254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.926 [2024-12-15 05:57:01.321282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.325271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.325319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.926 [2024-12-15 05:57:01.325331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.329413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.329472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.926 [2024-12-15 05:57:01.329486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.926 [2024-12-15 05:57:01.333417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.926 [2024-12-15 05:57:01.333465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.333477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.337398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.337446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.337458] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.341453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.341501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.341514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.345435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.345482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.345494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.349400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.349448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.349460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.353443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.353490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.353503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.357428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.357476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.357488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.361504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.361552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.361564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.365553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.365600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.365612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.369627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.369675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.369687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.373650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.373698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.373727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.377804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.377851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.377863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.381947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.381979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.382009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.385947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.385994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.386007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.389960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.390009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.390022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.393996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.394047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.394066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.398648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.398695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.398708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.403246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.403281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.403295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.407827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.407862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.407887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.412380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.412444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.412472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.417077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.417156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.417168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.421533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.421580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.421592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.426152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.426229] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.426241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.430563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.430610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.430622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.435051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.435099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.435123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.439549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.439609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.439621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.444207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.444254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.444266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.448622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.927 [2024-12-15 05:57:01.448672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.927 [2024-12-15 05:57:01.448684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.927 [2024-12-15 05:57:01.453253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.453301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.453313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.457644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.457692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.457710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.462171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.462219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.462232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.466566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.466614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.466626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.471141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.471200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.471214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.475624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.475672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.475684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.480213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.480261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.480274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.484593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.484641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.484655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.489062] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.489126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.489139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.493341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.493389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.493402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.497541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.497588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.497600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.501700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.501750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.501762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.505816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.505864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.505876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.509755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.509802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.509814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.513828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.513876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.513914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.517846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.517920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.517933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.521906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.521964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.521976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.525942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.525990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.526002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.529928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.529975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.529987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.534008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.534055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.534067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.538174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.538224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.538236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.542246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.542295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.542322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.546346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.546394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.546405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.550424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.550473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.550484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.554462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.554510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.554522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.558469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.558517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.558530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:39.928 [2024-12-15 05:57:01.562735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:39.928 [2024-12-15 05:57:01.562784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:39.928 [2024-12-15 05:57:01.562797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.566938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.566986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.566998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.571229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.571263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.571276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.575403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.575453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.575496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.579558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.579606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.579618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.583710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.583757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.583785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.588071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.588119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.588131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.592122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.592170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.592197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.596215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.596263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.596275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.600120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.600182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:40.189 [2024-12-15 05:57:01.600194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.604129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.604191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.604203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.608206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.608254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.608267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.612295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.612344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.612356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.616403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.616451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.189 [2024-12-15 05:57:01.616463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.189 [2024-12-15 05:57:01.620593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.189 [2024-12-15 05:57:01.620640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.620653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.624633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.624683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.624695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.628717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.628765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.628777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.632863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.632920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.632932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.636996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.637044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.637057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.640966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.641022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.641036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.644995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.645042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.645054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.649047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.649093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.649105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.653000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.653046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.653058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.657066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.657113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.657125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.661195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.661242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.661255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.665117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.665164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.665176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.669074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.669121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.669149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.673291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.673340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.673353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.677488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.677535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.677547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.681552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.681599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.681611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.685653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 
00:16:40.190 [2024-12-15 05:57:01.685701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.685714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.689644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.689691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.689703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.693665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.693712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.693724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.697583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.697630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.697642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.701879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.701951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.701963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.706081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.706142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.706154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.710169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.710215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.710226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.714160] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.714208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.714221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.718043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.718090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.718102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.722123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.722171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.722183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.726147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.726194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.726206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.730404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.730452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.190 [2024-12-15 05:57:01.730464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.190 [2024-12-15 05:57:01.734937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.190 [2024-12-15 05:57:01.734986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.735000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.739231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.739265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.739278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.743450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.743527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.743539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.747718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.747766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.747789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.751922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.751980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.751993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.755804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.755850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.755862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.759802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.759849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.759860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.763737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.763784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.763796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.767764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.767811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.767822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.771797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.771844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.771856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.775694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.775741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.775753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.779657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.779703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.779714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.783922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.783967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.783979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.788363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.788411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.788424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.792904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.792981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.792996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.797358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.797391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.797403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.801729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.801777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.801790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.806140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.806173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.806186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.810326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.810373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.810385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.814441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.814489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.814500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.818487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.818535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.818547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.191 [2024-12-15 05:57:01.822768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.191 [2024-12-15 05:57:01.822817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.191 [2024-12-15 05:57:01.822830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.827054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.827102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.827114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.831266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.831301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.831314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.835673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.835721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.835733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.839896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.839952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.839965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.843874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.843930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.843942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.848073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.848122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.848150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.852273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.852321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.852333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.856610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.856658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.856670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.860991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.861026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.861040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.865655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.865704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.865718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.870068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.870103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.870116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.874488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.874535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.874547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.878735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.878782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.878793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.883280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.883314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.883328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.887682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.887728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.887740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.892029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.892079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.892092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.896550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.896599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.451 [2024-12-15 05:57:01.896611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.451 [2024-12-15 05:57:01.900771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c5b0) 00:16:40.451 [2024-12-15 05:57:01.900819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:40.452 [2024-12-15 05:57:01.900832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:40.452 00:16:40.452 Latency(us) 00:16:40.452 [2024-12-15T05:57:02.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.452 [2024-12-15T05:57:02.093Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:40.452 nvme0n1 : 2.00 7414.48 926.81 0.00 0.00 2154.78 1720.32 5123.72 00:16:40.452 [2024-12-15T05:57:02.093Z] =================================================================================================================== 00:16:40.452 [2024-12-15T05:57:02.093Z] Total : 7414.48 926.81 0.00 0.00 2154.78 1720.32 5123.72 00:16:40.452 0 00:16:40.452 05:57:01 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:40.452 05:57:01 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:40.452 05:57:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:40.452 05:57:01 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:40.452 | .driver_specific 00:16:40.452 | .nvme_error 00:16:40.452 | .status_code 00:16:40.452 | .command_transient_transport_error' 00:16:40.711 05:57:02 -- host/digest.sh@71 -- # (( 478 > 0 )) 00:16:40.711 05:57:02 -- host/digest.sh@73 -- # killprocess 83534 00:16:40.711 05:57:02 -- common/autotest_common.sh@936 -- # '[' -z 83534 ']' 00:16:40.711 05:57:02 -- common/autotest_common.sh@940 -- # kill -0 83534 00:16:40.711 05:57:02 -- common/autotest_common.sh@941 -- # uname 00:16:40.711 05:57:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.711 05:57:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83534 00:16:40.711 05:57:02 -- common/autotest_common.sh@942 -- # 
process_name=reactor_1 00:16:40.711 05:57:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.711 killing process with pid 83534 00:16:40.711 05:57:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83534' 00:16:40.711 05:57:02 -- common/autotest_common.sh@955 -- # kill 83534 00:16:40.711 Received shutdown signal, test time was about 2.000000 seconds 00:16:40.711 00:16:40.711 Latency(us) 00:16:40.711 [2024-12-15T05:57:02.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.711 [2024-12-15T05:57:02.352Z] =================================================================================================================== 00:16:40.711 [2024-12-15T05:57:02.352Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.711 05:57:02 -- common/autotest_common.sh@960 -- # wait 83534 00:16:40.971 05:57:02 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:16:40.971 05:57:02 -- host/digest.sh@54 -- # local rw bs qd 00:16:40.971 05:57:02 -- host/digest.sh@56 -- # rw=randwrite 00:16:40.971 05:57:02 -- host/digest.sh@56 -- # bs=4096 00:16:40.971 05:57:02 -- host/digest.sh@56 -- # qd=128 00:16:40.971 05:57:02 -- host/digest.sh@58 -- # bperfpid=83595 00:16:40.971 05:57:02 -- host/digest.sh@60 -- # waitforlisten 83595 /var/tmp/bperf.sock 00:16:40.971 05:57:02 -- common/autotest_common.sh@829 -- # '[' -z 83595 ']' 00:16:40.971 05:57:02 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:40.971 05:57:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:40.971 05:57:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.971 05:57:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:40.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:40.971 05:57:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.971 05:57:02 -- common/autotest_common.sh@10 -- # set +x 00:16:40.971 [2024-12-15 05:57:02.425417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:40.971 [2024-12-15 05:57:02.425509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83595 ] 00:16:40.971 [2024-12-15 05:57:02.553957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.971 [2024-12-15 05:57:02.586082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.906 05:57:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.906 05:57:03 -- common/autotest_common.sh@862 -- # return 0 00:16:41.906 05:57:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:41.906 05:57:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:42.165 05:57:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:42.165 05:57:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.165 05:57:03 -- common/autotest_common.sh@10 -- # set +x 00:16:42.165 05:57:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.165 05:57:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:42.165 05:57:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:42.424 nvme0n1 00:16:42.424 05:57:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:42.424 05:57:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.424 05:57:03 -- common/autotest_common.sh@10 -- # set +x 00:16:42.424 05:57:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.424 05:57:03 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:42.424 05:57:03 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:42.683 Running I/O for 2 seconds... 
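(For readability: the traced commands above are the whole recipe for this randwrite digest-error pass. Below is a minimal stand-alone sketch of the same sequence, reconstructed only from commands already shown in this log. The SPDK tree location, the bperf RPC socket, and the 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 target come from this run; the trace does not show which RPC socket rpc_cmd uses for the injection calls, so INJECT_SOCK=/var/tmp/spdk.sock is an assumption.)

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
INJECT_SOCK=/var/tmp/spdk.sock   # assumption: default SPDK RPC socket; the trace does not show rpc_cmd's -s argument

# Start bdevperf on core 1 (mask 0x2): 4 KiB random writes, queue depth 128, 2 s, idle until told to run (-z).
# The harness backgrounds this and waits for the RPC socket to appear (waitforlisten).
$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 4096 -t 2 -q 128 -z &

# Count NVMe errors per status code and retry failed I/O indefinitely (-1), so injected digest errors
# surface as COMMAND TRANSIENT TRANSPORT ERROR completions instead of failing the job.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with data digest enabled while CRC32C corruption is switched off...
$SPDK/scripts/rpc.py -s $INJECT_SOCK accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then corrupt the next 256 CRC32C results and kick off the 2-second workload.
$SPDK/scripts/rpc.py -s $INJECT_SOCK accel_error_inject_error -o crc32c -t corrupt -i 256
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

# Afterwards the test reads the transient-error counter, exactly as in the randread pass that just finished
# (get_transient_errcount), and asserts that at least one such error was observed.
errs=$($SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))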
00:16:42.683 [2024-12-15 05:57:04.117571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ddc00 00:16:42.683 [2024-12-15 05:57:04.118988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.683 [2024-12-15 05:57:04.119031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.134727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fef90 00:16:42.684 [2024-12-15 05:57:04.136139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.136189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.150180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ff3c8 00:16:42.684 [2024-12-15 05:57:04.151612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.151658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.165206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190feb58 00:16:42.684 [2024-12-15 05:57:04.166514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.166559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.180415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fe720 00:16:42.684 [2024-12-15 05:57:04.181727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.181773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.195454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fe2e8 00:16:42.684 [2024-12-15 05:57:04.196811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.196856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.210479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fdeb0 00:16:42.684 [2024-12-15 05:57:04.211817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.211862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:16:42.684 [2024-12-15 05:57:04.226427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fda78 00:16:42.684 [2024-12-15 05:57:04.227785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.227831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.243107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fd640 00:16:42.684 [2024-12-15 05:57:04.244468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.244511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.259662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fd208 00:16:42.684 [2024-12-15 05:57:04.261102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.261180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.275545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fcdd0 00:16:42.684 [2024-12-15 05:57:04.276804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.276849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.289960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fc998 00:16:42.684 [2024-12-15 05:57:04.291196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.291242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.304321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fc560 00:16:42.684 [2024-12-15 05:57:04.305506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.305551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:42.684 [2024-12-15 05:57:04.318814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fc128 00:16:42.684 [2024-12-15 05:57:04.320123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.684 [2024-12-15 05:57:04.320169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:16:42.943 [2024-12-15 05:57:04.333834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fbcf0 00:16:42.943 [2024-12-15 05:57:04.335039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.943 [2024-12-15 05:57:04.335085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:42.943 [2024-12-15 05:57:04.348585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fb8b8 00:16:42.943 [2024-12-15 05:57:04.349788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.349833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.364728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fb480 00:16:42.944 [2024-12-15 05:57:04.366110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.366170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.380433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fb048 00:16:42.944 [2024-12-15 05:57:04.381747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.381791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.395536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fac10 00:16:42.944 [2024-12-15 05:57:04.396687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.396731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.410521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fa7d8 00:16:42.944 [2024-12-15 05:57:04.411757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.411802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.425364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190fa3a0 00:16:42.944 [2024-12-15 05:57:04.426555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.426598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.440329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f9f68 00:16:42.944 [2024-12-15 05:57:04.441439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.441483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.455330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f9b30 00:16:42.944 [2024-12-15 05:57:04.456486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.456532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.471438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f96f8 00:16:42.944 [2024-12-15 05:57:04.472749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.472798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.488347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f92c0 00:16:42.944 [2024-12-15 05:57:04.489506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.489547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.505014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f8e88 00:16:42.944 [2024-12-15 05:57:04.506192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.506237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.521510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f8a50 00:16:42.944 [2024-12-15 05:57:04.522677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.522757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.538229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f8618 00:16:42.944 [2024-12-15 05:57:04.539338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.539376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.553911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f81e0 00:16:42.944 [2024-12-15 05:57:04.555127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.555199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:42.944 [2024-12-15 05:57:04.568706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f7da8 00:16:42.944 [2024-12-15 05:57:04.569780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.944 [2024-12-15 05:57:04.569813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:43.203 [2024-12-15 05:57:04.583863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f7970 00:16:43.203 [2024-12-15 05:57:04.585035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.203 [2024-12-15 05:57:04.585083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:43.203 [2024-12-15 05:57:04.598574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f7538 00:16:43.203 [2024-12-15 05:57:04.599686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.203 [2024-12-15 05:57:04.599860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:43.203 [2024-12-15 05:57:04.612918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f7100 00:16:43.203 [2024-12-15 05:57:04.614111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.203 [2024-12-15 05:57:04.614294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:43.203 [2024-12-15 05:57:04.627546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f6cc8 00:16:43.203 [2024-12-15 05:57:04.628677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.203 [2024-12-15 05:57:04.628861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:43.203 [2024-12-15 05:57:04.642121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f6890 00:16:43.203 [2024-12-15 05:57:04.643354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.203 [2024-12-15 05:57:04.643559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:43.203 [2024-12-15 05:57:04.656626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f6458 00:16:43.203 [2024-12-15 05:57:04.657794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.203 [2024-12-15 05:57:04.658014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:43.203 [2024-12-15 05:57:04.670958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f6020 00:16:43.203 [2024-12-15 05:57:04.672129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.672330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.685354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f5be8 00:16:43.204 [2024-12-15 05:57:04.686500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.686682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.700112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f57b0 00:16:43.204 [2024-12-15 05:57:04.701195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.701376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.714568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f5378 00:16:43.204 [2024-12-15 05:57:04.715740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.715773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.729011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f4f40 00:16:43.204 [2024-12-15 05:57:04.729922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.730152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.743447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f4b08 00:16:43.204 [2024-12-15 05:57:04.744412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.744444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.757808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f46d0 00:16:43.204 [2024-12-15 05:57:04.758774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.758805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.772194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f4298 00:16:43.204 [2024-12-15 05:57:04.773076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.773124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.786838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f3e60 00:16:43.204 [2024-12-15 05:57:04.787878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.788060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.802146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f3a28 00:16:43.204 [2024-12-15 05:57:04.803074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.803278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.816771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f35f0 00:16:43.204 [2024-12-15 05:57:04.817862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.818055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:43.204 [2024-12-15 05:57:04.831994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f31b8 00:16:43.204 [2024-12-15 05:57:04.833185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.204 [2024-12-15 05:57:04.833369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.848009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f2d80 00:16:43.463 [2024-12-15 05:57:04.849071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.849266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.862518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f2948 00:16:43.463 [2024-12-15 05:57:04.863604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.863784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.877371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f2510 00:16:43.463 [2024-12-15 05:57:04.878385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.878567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.891836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f20d8 00:16:43.463 [2024-12-15 05:57:04.892907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.893122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.906310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f1ca0 00:16:43.463 [2024-12-15 05:57:04.907308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.907502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.922061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f1868 00:16:43.463 [2024-12-15 05:57:04.923053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.923260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.938226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f1430 00:16:43.463 [2024-12-15 05:57:04.939289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.939328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.953636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f0ff8 00:16:43.463 [2024-12-15 05:57:04.954420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 
05:57:04.954455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.968092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f0bc0 00:16:43.463 [2024-12-15 05:57:04.968857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.968953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.982365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f0788 00:16:43.463 [2024-12-15 05:57:04.983094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.983285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:04.996598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190f0350 00:16:43.463 [2024-12-15 05:57:04.997327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:04.997481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:05.011102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190eff18 00:16:43.463 [2024-12-15 05:57:05.012137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.463 [2024-12-15 05:57:05.012167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:43.463 [2024-12-15 05:57:05.025917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190efae0 00:16:43.463 [2024-12-15 05:57:05.026604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.464 [2024-12-15 05:57:05.026639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:43.464 [2024-12-15 05:57:05.040303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ef6a8 00:16:43.464 [2024-12-15 05:57:05.040995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.464 [2024-12-15 05:57:05.041030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:43.464 [2024-12-15 05:57:05.054462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ef270 00:16:43.464 [2024-12-15 05:57:05.055230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5893 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:43.464 [2024-12-15 05:57:05.055401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:43.464 [2024-12-15 05:57:05.069305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190eee38 00:16:43.464 [2024-12-15 05:57:05.069980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.464 [2024-12-15 05:57:05.070015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:43.464 [2024-12-15 05:57:05.083591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190eea00 00:16:43.464 [2024-12-15 05:57:05.084294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.464 [2024-12-15 05:57:05.084331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:43.464 [2024-12-15 05:57:05.099006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ee5c8 00:16:43.464 [2024-12-15 05:57:05.099752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.464 [2024-12-15 05:57:05.099979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.113921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ee190 00:16:43.723 [2024-12-15 05:57:05.114571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.114606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.128245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190edd58 00:16:43.723 [2024-12-15 05:57:05.128844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.128893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.142921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ed920 00:16:43.723 [2024-12-15 05:57:05.143634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.143794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.157469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ed4e8 00:16:43.723 [2024-12-15 05:57:05.158085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14092 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.158225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.171862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ed0b0 00:16:43.723 [2024-12-15 05:57:05.172584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.172618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.186342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ecc78 00:16:43.723 [2024-12-15 05:57:05.187137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.187332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.200896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ec840 00:16:43.723 [2024-12-15 05:57:05.201488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.201629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.216551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ec408 00:16:43.723 [2024-12-15 05:57:05.217208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.217246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.232119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ebfd0 00:16:43.723 [2024-12-15 05:57:05.232739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.232776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.249189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ebb98 00:16:43.723 [2024-12-15 05:57:05.249835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.249911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.266108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190eb760 00:16:43.723 [2024-12-15 05:57:05.266701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17820 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.266739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.282394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190eb328 00:16:43.723 [2024-12-15 05:57:05.282992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.283187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.298144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190eaef0 00:16:43.723 [2024-12-15 05:57:05.298741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.298964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.313192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190eaab8 00:16:43.723 [2024-12-15 05:57:05.313721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.313757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.328124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ea680 00:16:43.723 [2024-12-15 05:57:05.328641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.328683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.343015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190ea248 00:16:43.723 [2024-12-15 05:57:05.343610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.343646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:43.723 [2024-12-15 05:57:05.359039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e9e10 00:16:43.723 [2024-12-15 05:57:05.359772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.723 [2024-12-15 05:57:05.359802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.374349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e99d8 00:16:43.983 [2024-12-15 05:57:05.375101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:9174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.375129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.388971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e95a0 00:16:43.983 [2024-12-15 05:57:05.389576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.389606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.403680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e9168 00:16:43.983 [2024-12-15 05:57:05.404334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.404393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.418044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e8d30 00:16:43.983 [2024-12-15 05:57:05.418628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.418658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.433401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e88f8 00:16:43.983 [2024-12-15 05:57:05.433874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.433922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.448552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e84c0 00:16:43.983 [2024-12-15 05:57:05.449014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.449069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.462596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e8088 00:16:43.983 [2024-12-15 05:57:05.463021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.463047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.476533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e7c50 00:16:43.983 [2024-12-15 05:57:05.476929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:24930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.476953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.490337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e7818 00:16:43.983 [2024-12-15 05:57:05.490734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.490759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.504795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e73e0 00:16:43.983 [2024-12-15 05:57:05.505309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.505337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.520945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e6fa8 00:16:43.983 [2024-12-15 05:57:05.521367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.521391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.536881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e6b70 00:16:43.983 [2024-12-15 05:57:05.537349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.537379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.552192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e6738 00:16:43.983 [2024-12-15 05:57:05.552545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.552570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.566464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e6300 00:16:43.983 [2024-12-15 05:57:05.566811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.566835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.580907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e5ec8 00:16:43.983 [2024-12-15 05:57:05.581241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.581282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.595977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e5a90 00:16:43.983 [2024-12-15 05:57:05.596315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.596340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:43.983 [2024-12-15 05:57:05.611952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e5658 00:16:43.983 [2024-12-15 05:57:05.612310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:43.983 [2024-12-15 05:57:05.612337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.627986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e5220 00:16:44.243 [2024-12-15 05:57:05.628327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.628352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.643455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e4de8 00:16:44.243 [2024-12-15 05:57:05.643870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.643906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.660282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e49b0 00:16:44.243 [2024-12-15 05:57:05.660570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.660601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.676677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e4578 00:16:44.243 [2024-12-15 05:57:05.677048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.677108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.693081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e4140 00:16:44.243 [2024-12-15 05:57:05.693482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.693510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.709542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e3d08 00:16:44.243 [2024-12-15 05:57:05.709854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.709888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.724696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e38d0 00:16:44.243 [2024-12-15 05:57:05.725005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.725029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.738999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e3498 00:16:44.243 [2024-12-15 05:57:05.739455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.739508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.753442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e3060 00:16:44.243 [2024-12-15 05:57:05.753678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.753698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.767543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e2c28 00:16:44.243 [2024-12-15 05:57:05.767784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.767803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.781631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e27f0 00:16:44.243 [2024-12-15 05:57:05.782040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.782066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.796069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e23b8 00:16:44.243 [2024-12-15 
05:57:05.796280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.796303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.810072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e1f80 00:16:44.243 [2024-12-15 05:57:05.810274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.810294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.824102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e1b48 00:16:44.243 [2024-12-15 05:57:05.824291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.824311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.838124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e1710 00:16:44.243 [2024-12-15 05:57:05.838308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.838328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.852263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e12d8 00:16:44.243 [2024-12-15 05:57:05.852437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.852457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:44.243 [2024-12-15 05:57:05.866518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e0ea0 00:16:44.243 [2024-12-15 05:57:05.866760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.243 [2024-12-15 05:57:05.866780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:44.510 [2024-12-15 05:57:05.882001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e0a68 00:16:44.510 [2024-12-15 05:57:05.882172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.510 [2024-12-15 05:57:05.882193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:44.510 [2024-12-15 05:57:05.896608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e0630 00:16:44.510 
[2024-12-15 05:57:05.896776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.510 [2024-12-15 05:57:05.896796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:44.510 [2024-12-15 05:57:05.910826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190e01f8 00:16:44.510 [2024-12-15 05:57:05.911017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.510 [2024-12-15 05:57:05.911038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:44.510 [2024-12-15 05:57:05.925431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190dfdc0 00:16:44.510 [2024-12-15 05:57:05.925562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.510 [2024-12-15 05:57:05.925581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:44.510 [2024-12-15 05:57:05.941329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190df988 00:16:44.510 [2024-12-15 05:57:05.941478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.510 [2024-12-15 05:57:05.941500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:44.510 [2024-12-15 05:57:05.957651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190df550 00:16:44.510 [2024-12-15 05:57:05.957793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.510 [2024-12-15 05:57:05.957816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:05.973687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190df118 00:16:44.511 [2024-12-15 05:57:05.973802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:05.973824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:05.988305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190dece0 00:16:44.511 [2024-12-15 05:57:05.988405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:05.988425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:06.002399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190de8a8 
00:16:44.511 [2024-12-15 05:57:06.002491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:06.002511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:06.016576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190de038 00:16:44.511 [2024-12-15 05:57:06.016655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:06.016676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:06.036934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190de038 00:16:44.511 [2024-12-15 05:57:06.038283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:06.038317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:06.051545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190de470 00:16:44.511 [2024-12-15 05:57:06.052969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:06.052996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:06.065860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190de8a8 00:16:44.511 [2024-12-15 05:57:06.067260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:06.067294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:06.080273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190dece0 00:16:44.511 [2024-12-15 05:57:06.081504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:06.081535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:44.511 [2024-12-15 05:57:06.094531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa91160) with pdu=0x2000190df118 00:16:44.511 [2024-12-15 05:57:06.095836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.511 [2024-12-15 05:57:06.095867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:44.511 00:16:44.511 Latency(us) 00:16:44.511 [2024-12-15T05:57:06.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:16:44.511 [2024-12-15T05:57:06.152Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.511 nvme0n1 : 2.01 16844.10 65.80 0.00 0.00 7593.22 6732.33 22282.24 00:16:44.511 [2024-12-15T05:57:06.152Z] =================================================================================================================== 00:16:44.511 [2024-12-15T05:57:06.152Z] Total : 16844.10 65.80 0.00 0.00 7593.22 6732.33 22282.24 00:16:44.511 0 00:16:44.511 05:57:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:44.511 05:57:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:44.511 05:57:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:44.511 | .driver_specific 00:16:44.511 | .nvme_error 00:16:44.511 | .status_code 00:16:44.511 | .command_transient_transport_error' 00:16:44.511 05:57:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:44.774 05:57:06 -- host/digest.sh@71 -- # (( 132 > 0 )) 00:16:44.774 05:57:06 -- host/digest.sh@73 -- # killprocess 83595 00:16:44.774 05:57:06 -- common/autotest_common.sh@936 -- # '[' -z 83595 ']' 00:16:44.774 05:57:06 -- common/autotest_common.sh@940 -- # kill -0 83595 00:16:44.774 05:57:06 -- common/autotest_common.sh@941 -- # uname 00:16:45.033 05:57:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.033 05:57:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83595 00:16:45.033 killing process with pid 83595 00:16:45.033 Received shutdown signal, test time was about 2.000000 seconds 00:16:45.033 00:16:45.033 Latency(us) 00:16:45.033 [2024-12-15T05:57:06.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.033 [2024-12-15T05:57:06.674Z] =================================================================================================================== 00:16:45.033 [2024-12-15T05:57:06.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.033 05:57:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:45.033 05:57:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:45.033 05:57:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83595' 00:16:45.033 05:57:06 -- common/autotest_common.sh@955 -- # kill 83595 00:16:45.033 05:57:06 -- common/autotest_common.sh@960 -- # wait 83595 00:16:45.033 05:57:06 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:16:45.033 05:57:06 -- host/digest.sh@54 -- # local rw bs qd 00:16:45.033 05:57:06 -- host/digest.sh@56 -- # rw=randwrite 00:16:45.033 05:57:06 -- host/digest.sh@56 -- # bs=131072 00:16:45.033 05:57:06 -- host/digest.sh@56 -- # qd=16 00:16:45.033 05:57:06 -- host/digest.sh@58 -- # bperfpid=83655 00:16:45.033 05:57:06 -- host/digest.sh@60 -- # waitforlisten 83655 /var/tmp/bperf.sock 00:16:45.033 05:57:06 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:45.033 05:57:06 -- common/autotest_common.sh@829 -- # '[' -z 83655 ']' 00:16:45.033 05:57:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:45.033 05:57:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.033 05:57:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:16:45.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:45.033 05:57:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.033 05:57:06 -- common/autotest_common.sh@10 -- # set +x 00:16:45.033 [2024-12-15 05:57:06.626129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:45.033 [2024-12-15 05:57:06.626376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83655 ] 00:16:45.033 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:45.033 Zero copy mechanism will not be used. 00:16:45.292 [2024-12-15 05:57:06.760616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.292 [2024-12-15 05:57:06.793593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.292 05:57:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.292 05:57:06 -- common/autotest_common.sh@862 -- # return 0 00:16:45.292 05:57:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:45.293 05:57:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:45.552 05:57:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:45.552 05:57:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.552 05:57:07 -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 05:57:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.552 05:57:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:45.552 05:57:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.120 nvme0n1 00:16:46.120 05:57:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:46.120 05:57:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.120 05:57:07 -- common/autotest_common.sh@10 -- # set +x 00:16:46.120 05:57:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.120 05:57:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:46.120 05:57:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:46.120 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:46.120 Zero copy mechanism will not be used. 00:16:46.120 Running I/O for 2 seconds... 
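The trace above sets up the next digest case (randwrite, 128 KiB I/O, queue depth 16): bdevperf is started on a private RPC socket in wait-for-RPC mode, per-status-code NVMe error counting is switched on with an unlimited bdev retry count, the TCP controller is attached with data digest enabled (--ddgst), the accel crc32c error injector is armed to corrupt 32 operations, and the 2-second workload is then driven through bdevperf.py perform_tests. A condensed sketch of that sequence follows; the paths, addresses and counts are copied from the trace, while sending the injection RPCs to rpc.py's default socket (i.e. to the NVMe-oF target application rather than to bperf.sock) is an assumption about what the harness's rpc_cmd resolves to here:

    BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # Start bdevperf with the workload used above: randwrite, 128 KiB blocks, QD 16, 2 s;
    # -z makes it wait for an explicit perform_tests RPC before issuing I/O.
    "$BPERF" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
    while [ ! -S "$SOCK" ]; do sleep 0.1; done   # wait for the bperf RPC socket to appear

    # Count NVMe completions per status code and retry failed I/O indefinitely.
    "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any stale injection on the target (assumed default RPC socket), then attach the
    # controller with data digest enabled so every data PDU carries a CRC32C to verify.
    "$RPC" accel_error_inject_error -o crc32c -t disable
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm the injector to corrupt the next 32 crc32c operations, then run the workload.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests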
00:16:46.120 [2024-12-15 05:57:07.614433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.614960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.614992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.620121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.620429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.620458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.625446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.625801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.625831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.630538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.631059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.631092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.636097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.636396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.636423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.641138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.641437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.641465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.645695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.646018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.646045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.650360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.650647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.650673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.655195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.655521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.655548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.659856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.660195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.660234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.664480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.664770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.664796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.669314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.669601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.120 [2024-12-15 05:57:07.669628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.120 [2024-12-15 05:57:07.674056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.120 [2024-12-15 05:57:07.674329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.674356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.678668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.679129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.679185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.683649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.683954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.683991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.688419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.688721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.688747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.693316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.693615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.693642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.698110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.698410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.698437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.702929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.703316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.703356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.708058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.708368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.708395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.712793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.713158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.713190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.717563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.718036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.718068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.722369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.722653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.722679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.727105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.727459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.727516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.732069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.732420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.732447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.737165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.737464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.737491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.741857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.742219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.742252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.746545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.746824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 
[2024-12-15 05:57:07.746850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.751263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.751603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.751629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.121 [2024-12-15 05:57:07.755984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.121 [2024-12-15 05:57:07.756334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.121 [2024-12-15 05:57:07.756370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.761006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.761318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.761346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.765774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.766106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.766137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.770462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.770740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.770766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.775172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.775532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.775574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.779951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.780242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.780268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.784546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.785041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.785073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.789469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.789749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.789775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.794065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.794349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.794375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.798622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.798915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.798940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.803255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.803598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.803624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.807902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.808220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.808246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.812511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.812993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.813026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.817470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.817779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.817806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.382 [2024-12-15 05:57:07.822523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.382 [2024-12-15 05:57:07.822840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.382 [2024-12-15 05:57:07.822867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.827882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.828220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.828252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.833107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.833416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.833443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.838191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.838477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.838503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.843348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.843707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.843735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.848562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.849060] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.849093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.853963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.854274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.854301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.858746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.859073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.859100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.863660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.864013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.864040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.868448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.868912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.868973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.873699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.874010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.874037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.878668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.879012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.879039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.883899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.884435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.884467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.889092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.889398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.889424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.894203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.894489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.894515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.899016] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.899347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.899376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.903777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.904259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.904291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.908669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.908989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.909016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.913390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.913740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.913769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.918270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 
[2024-12-15 05:57:07.918560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.918587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.922960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.923299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.923327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.927890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.928378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.928410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.932813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.933145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.933193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.937717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.938040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.938067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.942557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.942842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.942879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.947580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.948083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.948115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.952794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with 
pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.953148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.953195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.383 [2024-12-15 05:57:07.957693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.383 [2024-12-15 05:57:07.958015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.383 [2024-12-15 05:57:07.958043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:07.962401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:07.962680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:07.962706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:07.967310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:07.967614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:07.967640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:07.972432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:07.972737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:07.972763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:07.977522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:07.977815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:07.977846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:07.982728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:07.983233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:07.983266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:07.988136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:07.988512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:07.988541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:07.993410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:07.993736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:07.993763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:07.998679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:07.999211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:07.999245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:08.003942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:08.004265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:08.004303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:08.008761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:08.009096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:08.009127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:08.013522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:08.013804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:08.013830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.384 [2024-12-15 05:57:08.018600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.384 [2024-12-15 05:57:08.019067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.384 [2024-12-15 05:57:08.019099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.023737] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.024076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.024108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.028820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.029195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.029228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.034039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.034359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.034387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.038756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.039236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.039270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.043800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.044144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.044175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.048522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.048808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.048835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.053393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.053674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.053700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:16:46.721 [2024-12-15 05:57:08.058069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.058370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.058396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.062694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.063214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.063247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.067647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.067946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.067981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.072343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.072638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.072684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.077216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.077493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.077519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.081877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.082236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.721 [2024-12-15 05:57:08.082324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.721 [2024-12-15 05:57:08.086609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.721 [2024-12-15 05:57:08.087065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.087097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.091662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.091960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.091996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.096321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.096642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.096669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.101500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.101810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.101837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.106474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.106989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.107034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.111746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.112105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.112135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.116811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.117160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.117190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.121582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.122119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.122152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.126524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.126807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.126833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.131283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.131621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.131647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.136103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.136382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.136408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.140865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.141225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.141257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.145673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.146178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.146210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.150593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.150889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.150925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.155347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.155708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.155735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.160154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.160435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.160460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.164831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.165143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.165173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.169560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.170095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.170127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.174515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.174797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.174822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.179198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.179569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.179595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.183912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.184250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.184281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.188622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.188934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 
05:57:08.188961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.193343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.193642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.193669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.198077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.198376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.198401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.202712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.203024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.203060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.207485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.207803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.207830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.212315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.212592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.212618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.216992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.722 [2024-12-15 05:57:08.217270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.722 [2024-12-15 05:57:08.217296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.722 [2024-12-15 05:57:08.221536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.222060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.222093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.226512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.226794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.226819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.231122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.231457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.231514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.235951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.236306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.236340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.240911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.241192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.241218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.245623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.246102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.246134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.251267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.251651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.251681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.256467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.256836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.256866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.262143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.262500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.262529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.267365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.267703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.267732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.272918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.273272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.273302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.278502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.278858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.278912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.284006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.284342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.284371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.289397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.289869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.289912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.294697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.295010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.295049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.299699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.300059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.300093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.304877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.305193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.305235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.310048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.310329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.310356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.315231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.315541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.315584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.320761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.321106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.321139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.326134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.326502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.326530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.332081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.332425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.332453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.337771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.338272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.338304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.723 [2024-12-15 05:57:08.343088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.723 [2024-12-15 05:57:08.343441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.723 [2024-12-15 05:57:08.343470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.348476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.348803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.348831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.353448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.353726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.353753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.358080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.358346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.358373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.362610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.362887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.362923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.367729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 
[2024-12-15 05:57:08.368043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.368070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.372403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.372701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.372728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.377135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.377416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.377441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.381726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.382200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.382233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.386584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.386866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.386901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.391321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.391669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.391695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.396145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.396425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.396450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.400729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) 
with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.401112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.401159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.405648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.406101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.406133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.410572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.410853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.410889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.415622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.415928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.415965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.420409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.420716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.420743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.425192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.425476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.425503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.429858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.984 [2024-12-15 05:57:08.430362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.984 [2024-12-15 05:57:08.430393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.984 [2024-12-15 05:57:08.434970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.435314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.435342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.439777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.440114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.440145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.444587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.444890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.444925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.449346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.449630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.449657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.454025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.454291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.454317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.458608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.458885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.458922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.463253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.463552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.463591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.467886] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.468225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.468255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.472656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.472972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.472998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.477432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.477862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.477901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.482245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.482539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.482565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.486821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.487184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.487215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.491568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.491845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.491882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.496234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.496510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.496536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:46.985 [2024-12-15 05:57:08.500823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.501184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.501216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.505551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.506078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.506110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.510423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.510712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.510740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.515237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.515546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.515587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.519919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.520252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.520283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.524727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.525040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.525065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.529391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.529687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.529714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.534635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.534984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.535012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.539793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.540174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.540205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.545256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.545643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.545685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.550465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.550785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.550813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.555552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.555844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.555881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.560199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.560477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.985 [2024-12-15 05:57:08.560502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.985 [2024-12-15 05:57:08.564887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.985 [2024-12-15 05:57:08.565176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.565202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.569451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.569892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.569948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.574371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.574650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.574676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.579009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.579335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.579362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.583647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.583941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.583975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.588449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.588755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.588782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.593225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.593509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.593536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.598381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.598686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.598713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.603514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.603847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.603882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.608230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.608509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.608536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.612911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.613204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.613231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.986 [2024-12-15 05:57:08.617671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:46.986 [2024-12-15 05:57:08.618026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.986 [2024-12-15 05:57:08.618047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.246 [2024-12-15 05:57:08.622774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.623087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.623110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.627724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.628265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.628340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.632696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.633028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 
05:57:08.633057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.637525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.637806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.637833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.642278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.642572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.642599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.646844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.647205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.647238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.651570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.652072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.652103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.656525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.656823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.656850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.661365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.661643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.661669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.666027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.666307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.666333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.670627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.670918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.670943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.675267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.675597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.675623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.679993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.680264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.680290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.684597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.684882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.684920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.689268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.689553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.689593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.693968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.694249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.694275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.698491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.698769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.698794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.703232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.703551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.703578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.708072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.708385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.708411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.713134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.713424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.713450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.718224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.718560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.718590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.723380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.723890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.723952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.728719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.729047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.729075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.733627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.733943] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.733969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.738467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.738902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.738960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.743584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.743873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.743925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.748598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.748884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.247 [2024-12-15 05:57:08.748919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.247 [2024-12-15 05:57:08.753378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.247 [2024-12-15 05:57:08.753683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.753709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.758112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.758382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.758408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.762805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.763314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.763346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.767932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.768230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.768256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.772574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.772861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.772895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.777371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.777656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.777682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.782505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.783027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.783057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.787910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.788242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.788268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.793100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.793408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.793435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.798079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.798351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.798377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.802713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 
[2024-12-15 05:57:08.803221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.803255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.807837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.808147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.808173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.812495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.812788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.812814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.817151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.817468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.817495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.821854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.822157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.822183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.826519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.826936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.826966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.831276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.831574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.831600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.836101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with 
pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.836390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.836416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.840679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.840981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.841007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.845284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.845570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.845596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.850062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.850334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.850359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.854829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.855291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.855323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.860318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.860635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.860663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.865279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.865568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.865595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.869989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.870300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.870328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.874669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.875097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.875121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.248 [2024-12-15 05:57:08.879829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.248 [2024-12-15 05:57:08.880220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.248 [2024-12-15 05:57:08.880264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.885217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.885515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.885553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.890307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.890650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.890681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.894961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.895306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.895335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.899751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.900043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.900069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.904461] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.904742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.904768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.909105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.909385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.909411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.913665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.913961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.913991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.918478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.918912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.918935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.923356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.923683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.510 [2024-12-15 05:57:08.923709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.510 [2024-12-15 05:57:08.927963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.510 [2024-12-15 05:57:08.928242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.928267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.932493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.932771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.932797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
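(Aside, for reading the repeated failures above and below: each "tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error" line means the NVMe/TCP receiver recomputed the CRC-32C data digest (DDGST) over the PDU payload and it did not match the digest carried in the PDU, so the WRITE printed next to it completes with a transient transport error. The sketch below is a minimal, self-contained bitwise CRC-32C reference showing what that digest is; it is not SPDK's actual code path, which uses its own, optionally accelerated, CRC-32C helpers.)

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustrative bitwise CRC-32C (Castagnoli), the checksum the NVMe/TCP
     * data digest (DDGST) is based on.  Not SPDK's implementation; this is
     * only a reference for what a "Data digest error" is comparing.
     */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Standard CRC-32C check value: "123456789" -> 0xe3069283. */
        const uint8_t data[] = "123456789";
        printf("crc32c = 0x%08x\n", crc32c(data, sizeof(data) - 1));
        return 0;
    }

A mismatch between this value, recomputed by the receiver over the incoming data, and the DDGST field of the PDU is what the log reports as a data digest error.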
00:16:47.511 [2024-12-15 05:57:08.937138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.937434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.937460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.941700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.942032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.942053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.946385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.946673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.946700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.951290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.951659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.951686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.956511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.956848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.956886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.961907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.962276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.962309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.967106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.967452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.967491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.972440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.972786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.972814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.977403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.977715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.977742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.982251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.982554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.982580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.987343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.987716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.987744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.992492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.992784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.992811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:08.997343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:08.997673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:08.997716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.002646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.003125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.003183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.007877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.008245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.008296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.012878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.013266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.013312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.018180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.018500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.018527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.023184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.023509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.023551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.028548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.028920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.028967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.033438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.033911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.033955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.038358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.038645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.038671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.042903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.043247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.043275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.047651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.047938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.047992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.052569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.052910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.052947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.057784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.058302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.058335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.511 [2024-12-15 05:57:09.063002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.511 [2024-12-15 05:57:09.063340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.511 [2024-12-15 05:57:09.063367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.067904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.068262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.068294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.072702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.073007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 
05:57:09.073034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.077421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.077861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.077901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.082286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.082574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.082600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.087071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.087398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.087425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.091807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.092145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.092176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.096434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.096719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.096745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.101230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.101520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.101546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.105950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.106247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.106273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.110514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.110801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.110828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.115832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.116232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.116266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.121201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.121488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.121515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.125915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.126201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.126227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.130535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.130814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.130839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.135266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.135621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.135646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.139973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.140251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.140276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.512 [2024-12-15 05:57:09.144769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.512 [2024-12-15 05:57:09.145183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.512 [2024-12-15 05:57:09.145216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.149844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.150324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.150356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.155126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.155516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.155571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.159944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.160280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.160334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.164575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.164857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.164893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.169217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.169496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.169522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.173841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.174331] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.174362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.178667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.178960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.178986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.183335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.183657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.183683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.188002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.188267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.188292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.192677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.192995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.193021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.197371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.773 [2024-12-15 05:57:09.197648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.773 [2024-12-15 05:57:09.197674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.773 [2024-12-15 05:57:09.201996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.202275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.202300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.206466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.206743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.206769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.211254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.211581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.211607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.215906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.216204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.216259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.220542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.220820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.220846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.225221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.225498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.225525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.229901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.230205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.230231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.234675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.234981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.235008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.239750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 
05:57:09.240123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.240171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.245166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.245474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.245501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.250238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.250545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.250589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.256133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.256477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.256513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.261655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.262040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.262086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.267213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.267547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.267593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.272728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.273053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.273086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.278160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with 
pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.278501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.278551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.283802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.284179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.284211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.289466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.289831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.289881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.294790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.295153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.295202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.299897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.300278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.300309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.305088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.305408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.305446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.310141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.310478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.310509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.315066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.315442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.315509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.320042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.320393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.320430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.324939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.325292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.325326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.330185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.330547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.330596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.335674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.336037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.336062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.774 [2024-12-15 05:57:09.341070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.774 [2024-12-15 05:57:09.341420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.774 [2024-12-15 05:57:09.341455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.346185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.346534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.346569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.351030] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.351400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.351434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.356063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.356389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.356422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.360770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.361132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.361166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.365439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.365797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.365835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.370169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.370514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.370547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.375273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.375647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.375681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.380500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.380836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.380906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:47.775 [2024-12-15 05:57:09.385463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.385814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.385847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.390440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.390776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.390807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.395119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.395461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.395509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.399719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.400046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.400076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.404416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.404754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.404786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.775 [2024-12-15 05:57:09.409502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:47.775 [2024-12-15 05:57:09.409844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.775 [2024-12-15 05:57:09.409869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.035 [2024-12-15 05:57:09.414320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.035 [2024-12-15 05:57:09.414667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.414701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.419540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.419912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.419953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.424331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.424664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.424691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.428947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.429286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.429318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.433664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.434006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.434039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.438452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.438828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.438862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.443422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.443795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.443828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.448121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.448434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.448464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.452789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.453118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.453149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.457553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.457882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.457922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.462303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.462645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.462676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.467100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.467471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.467503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.471994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.472319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.472350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.476878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.477216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.477253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.481522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.481846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.481885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.486431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.486809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.486841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.491636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.492006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.492050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.496714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.497124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.497162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.502098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.502415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.502446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.507724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.508137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.508169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.513301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.513667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.513700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.518556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.518909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 
05:57:09.518968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.524007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.524324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.524353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.529483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.529851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.036 [2024-12-15 05:57:09.529892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.036 [2024-12-15 05:57:09.534967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.036 [2024-12-15 05:57:09.535376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.535410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.540711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.541090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.541121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.546367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.546725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.546758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.551603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.551963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.552026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.556732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.557125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.557156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.561695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.562060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.562091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.566554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.566911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.566967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.571649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.571975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.572020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.576529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.576863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.576905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.581439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.581798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.581830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.586321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.586652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.586684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.591051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.591431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.591469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.595851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.596225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.596257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.600731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.601079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.601111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.037 [2024-12-15 05:57:09.605507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa8fe30) with pdu=0x2000190fef90 00:16:48.037 [2024-12-15 05:57:09.605840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.037 [2024-12-15 05:57:09.605881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.037 00:16:48.037 Latency(us) 00:16:48.037 [2024-12-15T05:57:09.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.037 [2024-12-15T05:57:09.678Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:48.037 nvme0n1 : 2.00 6284.48 785.56 0.00 0.00 2540.74 1951.19 5957.82 00:16:48.037 [2024-12-15T05:57:09.678Z] =================================================================================================================== 00:16:48.037 [2024-12-15T05:57:09.678Z] Total : 6284.48 785.56 0.00 0.00 2540.74 1951.19 5957.82 00:16:48.037 0 00:16:48.037 05:57:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:48.037 05:57:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:48.037 05:57:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:48.037 | .driver_specific 00:16:48.037 | .nvme_error 00:16:48.037 | .status_code 00:16:48.037 | .command_transient_transport_error' 00:16:48.037 05:57:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:48.296 05:57:09 -- host/digest.sh@71 -- # (( 405 > 0 )) 00:16:48.296 05:57:09 -- host/digest.sh@73 -- # killprocess 83655 00:16:48.296 05:57:09 -- common/autotest_common.sh@936 -- # '[' -z 83655 ']' 00:16:48.296 05:57:09 -- common/autotest_common.sh@940 -- # kill -0 83655 00:16:48.296 05:57:09 -- common/autotest_common.sh@941 -- # uname 00:16:48.296 05:57:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.296 05:57:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83655 00:16:48.296 05:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:48.296 05:57:09 -- common/autotest_common.sh@946 
-- # '[' reactor_1 = sudo ']' 00:16:48.296 05:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83655' 00:16:48.296 killing process with pid 83655 00:16:48.296 Received shutdown signal, test time was about 2.000000 seconds 00:16:48.296 00:16:48.296 Latency(us) 00:16:48.296 [2024-12-15T05:57:09.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.296 [2024-12-15T05:57:09.937Z] =================================================================================================================== 00:16:48.296 [2024-12-15T05:57:09.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:48.296 05:57:09 -- common/autotest_common.sh@955 -- # kill 83655 00:16:48.296 05:57:09 -- common/autotest_common.sh@960 -- # wait 83655 00:16:48.556 05:57:10 -- host/digest.sh@115 -- # killprocess 83460 00:16:48.556 05:57:10 -- common/autotest_common.sh@936 -- # '[' -z 83460 ']' 00:16:48.556 05:57:10 -- common/autotest_common.sh@940 -- # kill -0 83460 00:16:48.556 05:57:10 -- common/autotest_common.sh@941 -- # uname 00:16:48.556 05:57:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.556 05:57:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83460 00:16:48.556 05:57:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:48.556 killing process with pid 83460 00:16:48.556 05:57:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:48.556 05:57:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83460' 00:16:48.556 05:57:10 -- common/autotest_common.sh@955 -- # kill 83460 00:16:48.556 05:57:10 -- common/autotest_common.sh@960 -- # wait 83460 00:16:48.815 00:16:48.815 real 0m15.813s 00:16:48.815 user 0m30.994s 00:16:48.815 sys 0m4.417s 00:16:48.815 ************************************ 00:16:48.815 END TEST nvmf_digest_error 00:16:48.815 ************************************ 00:16:48.815 05:57:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:48.815 05:57:10 -- common/autotest_common.sh@10 -- # set +x 00:16:48.815 05:57:10 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:16:48.815 05:57:10 -- host/digest.sh@139 -- # nvmftestfini 00:16:48.815 05:57:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:48.815 05:57:10 -- nvmf/common.sh@116 -- # sync 00:16:48.815 05:57:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:48.815 05:57:10 -- nvmf/common.sh@119 -- # set +e 00:16:48.815 05:57:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:48.815 05:57:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:48.815 rmmod nvme_tcp 00:16:48.815 rmmod nvme_fabrics 00:16:48.815 rmmod nvme_keyring 00:16:48.815 05:57:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:48.815 05:57:10 -- nvmf/common.sh@123 -- # set -e 00:16:48.815 05:57:10 -- nvmf/common.sh@124 -- # return 0 00:16:48.815 05:57:10 -- nvmf/common.sh@477 -- # '[' -n 83460 ']' 00:16:48.815 05:57:10 -- nvmf/common.sh@478 -- # killprocess 83460 00:16:48.815 05:57:10 -- common/autotest_common.sh@936 -- # '[' -z 83460 ']' 00:16:48.815 05:57:10 -- common/autotest_common.sh@940 -- # kill -0 83460 00:16:48.815 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83460) - No such process 00:16:48.815 Process with pid 83460 is not found 00:16:48.815 05:57:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83460 is not found' 00:16:48.815 05:57:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:48.815 05:57:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 
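The (( 405 > 0 )) assertion traced above is the core of the digest-error check: it reads the NVMe transient transport error counter back over the bperf RPC socket and requires that at least one write tripped the data digest verification. A minimal standalone sketch of that readback, using the same rpc.py path, socket, and jq filter shown in the trace (the rpc and errcount shell variables are just local shorthand introduced here, and the sketch assumes the bperf application is still listening on /var/tmp/bperf.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Query per-bdev I/O statistics and pull out the transient transport error count.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The digest-error test passes as long as at least one injected error was detected.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"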
00:16:48.815 05:57:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:48.815 05:57:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.815 05:57:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:48.815 05:57:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.815 05:57:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.815 05:57:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.815 05:57:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:48.815 00:16:48.815 real 0m31.711s 00:16:48.815 user 0m59.951s 00:16:48.815 sys 0m8.946s 00:16:48.815 05:57:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:48.815 ************************************ 00:16:48.815 END TEST nvmf_digest 00:16:48.815 ************************************ 00:16:48.815 05:57:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.075 05:57:10 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:16:49.075 05:57:10 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:16:49.075 05:57:10 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:49.075 05:57:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.075 05:57:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.075 05:57:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.075 ************************************ 00:16:49.075 START TEST nvmf_multipath 00:16:49.075 ************************************ 00:16:49.075 05:57:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:49.075 * Looking for test storage... 00:16:49.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:49.075 05:57:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:49.075 05:57:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:49.075 05:57:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:49.075 05:57:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:49.075 05:57:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:49.075 05:57:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:49.075 05:57:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:49.075 05:57:10 -- scripts/common.sh@335 -- # IFS=.-: 00:16:49.075 05:57:10 -- scripts/common.sh@335 -- # read -ra ver1 00:16:49.075 05:57:10 -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.075 05:57:10 -- scripts/common.sh@336 -- # read -ra ver2 00:16:49.075 05:57:10 -- scripts/common.sh@337 -- # local 'op=<' 00:16:49.075 05:57:10 -- scripts/common.sh@339 -- # ver1_l=2 00:16:49.075 05:57:10 -- scripts/common.sh@340 -- # ver2_l=1 00:16:49.075 05:57:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:49.075 05:57:10 -- scripts/common.sh@343 -- # case "$op" in 00:16:49.075 05:57:10 -- scripts/common.sh@344 -- # : 1 00:16:49.075 05:57:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:49.075 05:57:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:49.075 05:57:10 -- scripts/common.sh@364 -- # decimal 1 00:16:49.075 05:57:10 -- scripts/common.sh@352 -- # local d=1 00:16:49.075 05:57:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.076 05:57:10 -- scripts/common.sh@354 -- # echo 1 00:16:49.076 05:57:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:49.076 05:57:10 -- scripts/common.sh@365 -- # decimal 2 00:16:49.076 05:57:10 -- scripts/common.sh@352 -- # local d=2 00:16:49.076 05:57:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.076 05:57:10 -- scripts/common.sh@354 -- # echo 2 00:16:49.076 05:57:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:49.076 05:57:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:49.076 05:57:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:49.076 05:57:10 -- scripts/common.sh@367 -- # return 0 00:16:49.076 05:57:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.076 05:57:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:49.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.076 --rc genhtml_branch_coverage=1 00:16:49.076 --rc genhtml_function_coverage=1 00:16:49.076 --rc genhtml_legend=1 00:16:49.076 --rc geninfo_all_blocks=1 00:16:49.076 --rc geninfo_unexecuted_blocks=1 00:16:49.076 00:16:49.076 ' 00:16:49.076 05:57:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:49.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.076 --rc genhtml_branch_coverage=1 00:16:49.076 --rc genhtml_function_coverage=1 00:16:49.076 --rc genhtml_legend=1 00:16:49.076 --rc geninfo_all_blocks=1 00:16:49.076 --rc geninfo_unexecuted_blocks=1 00:16:49.076 00:16:49.076 ' 00:16:49.076 05:57:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:49.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.076 --rc genhtml_branch_coverage=1 00:16:49.076 --rc genhtml_function_coverage=1 00:16:49.076 --rc genhtml_legend=1 00:16:49.076 --rc geninfo_all_blocks=1 00:16:49.076 --rc geninfo_unexecuted_blocks=1 00:16:49.076 00:16:49.076 ' 00:16:49.076 05:57:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:49.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.076 --rc genhtml_branch_coverage=1 00:16:49.076 --rc genhtml_function_coverage=1 00:16:49.076 --rc genhtml_legend=1 00:16:49.076 --rc geninfo_all_blocks=1 00:16:49.076 --rc geninfo_unexecuted_blocks=1 00:16:49.076 00:16:49.076 ' 00:16:49.076 05:57:10 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.076 05:57:10 -- nvmf/common.sh@7 -- # uname -s 00:16:49.076 05:57:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.076 05:57:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.076 05:57:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.076 05:57:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.076 05:57:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.076 05:57:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.076 05:57:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.076 05:57:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.076 05:57:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.076 05:57:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.076 05:57:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:16:49.076 
05:57:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:16:49.076 05:57:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.076 05:57:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.076 05:57:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:49.076 05:57:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.076 05:57:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.076 05:57:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.076 05:57:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.076 05:57:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.076 05:57:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.076 05:57:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.076 05:57:10 -- paths/export.sh@5 -- # export PATH 00:16:49.076 05:57:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.076 05:57:10 -- nvmf/common.sh@46 -- # : 0 00:16:49.076 05:57:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:49.076 05:57:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:49.076 05:57:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:49.076 05:57:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.076 05:57:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.076 05:57:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
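The NVME_HOSTNQN and NVME_HOSTID values generated above (together with the NVME_CONNECT and NVME_HOST helpers) are only consumed by tests that attach through nvme-cli; this multipath run attaches through bdevperf instead. Purely as an illustration of what those variables map to, and not a command executed in this run, a connect against the subsystem and listener that this suite sets up further down would look roughly like:

    # Illustrative only: address, port and subsystem NQN are the ones used later in this log.
    $NVME_CONNECT -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"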
00:16:49.076 05:57:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:49.076 05:57:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:49.076 05:57:10 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.076 05:57:10 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.076 05:57:10 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.076 05:57:10 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:49.076 05:57:10 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:49.076 05:57:10 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:49.076 05:57:10 -- host/multipath.sh@30 -- # nvmftestinit 00:16:49.076 05:57:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:49.076 05:57:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.076 05:57:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:49.076 05:57:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:49.076 05:57:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:49.076 05:57:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.076 05:57:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.076 05:57:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.076 05:57:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:49.076 05:57:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:49.076 05:57:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:49.076 05:57:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:49.076 05:57:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:49.076 05:57:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:49.076 05:57:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.076 05:57:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.076 05:57:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:49.076 05:57:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:49.076 05:57:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:49.076 05:57:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:49.076 05:57:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:49.076 05:57:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.076 05:57:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:49.076 05:57:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:49.076 05:57:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:49.076 05:57:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:49.076 05:57:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:49.076 05:57:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:49.335 Cannot find device "nvmf_tgt_br" 00:16:49.335 05:57:10 -- nvmf/common.sh@154 -- # true 00:16:49.335 05:57:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:49.335 Cannot find device "nvmf_tgt_br2" 00:16:49.335 05:57:10 -- nvmf/common.sh@155 -- # true 00:16:49.335 05:57:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:49.335 05:57:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:49.335 Cannot find device "nvmf_tgt_br" 00:16:49.335 05:57:10 -- nvmf/common.sh@157 -- # true 00:16:49.335 05:57:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:49.335 Cannot find device 
"nvmf_tgt_br2" 00:16:49.335 05:57:10 -- nvmf/common.sh@158 -- # true 00:16:49.335 05:57:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:49.335 05:57:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:49.335 05:57:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.335 05:57:10 -- nvmf/common.sh@161 -- # true 00:16:49.335 05:57:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.335 05:57:10 -- nvmf/common.sh@162 -- # true 00:16:49.335 05:57:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.335 05:57:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.335 05:57:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.335 05:57:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.335 05:57:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.335 05:57:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.335 05:57:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.336 05:57:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:49.336 05:57:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:49.336 05:57:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:49.336 05:57:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:49.336 05:57:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:49.336 05:57:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:49.336 05:57:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:49.336 05:57:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:49.336 05:57:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:49.336 05:57:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:49.336 05:57:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:49.336 05:57:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:49.336 05:57:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:49.595 05:57:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:49.595 05:57:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:49.595 05:57:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:49.595 05:57:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:49.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:49.595 00:16:49.595 --- 10.0.0.2 ping statistics --- 00:16:49.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.595 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:49.595 05:57:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:49.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:49.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:16:49.595 00:16:49.595 --- 10.0.0.3 ping statistics --- 00:16:49.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.595 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:49.595 05:57:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:49.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:49.595 00:16:49.595 --- 10.0.0.1 ping statistics --- 00:16:49.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.595 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:49.595 05:57:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.595 05:57:11 -- nvmf/common.sh@421 -- # return 0 00:16:49.595 05:57:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:49.595 05:57:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.595 05:57:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:49.595 05:57:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:49.595 05:57:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.595 05:57:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:49.595 05:57:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:49.595 05:57:11 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:49.595 05:57:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:49.595 05:57:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:49.596 05:57:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.596 05:57:11 -- nvmf/common.sh@469 -- # nvmfpid=83916 00:16:49.596 05:57:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:49.596 05:57:11 -- nvmf/common.sh@470 -- # waitforlisten 83916 00:16:49.596 05:57:11 -- common/autotest_common.sh@829 -- # '[' -z 83916 ']' 00:16:49.596 05:57:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.596 05:57:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.596 05:57:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.596 05:57:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.596 05:57:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.596 [2024-12-15 05:57:11.086320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:49.596 [2024-12-15 05:57:11.086436] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.596 [2024-12-15 05:57:11.227021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:49.855 [2024-12-15 05:57:11.260268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.855 [2024-12-15 05:57:11.260619] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.855 [2024-12-15 05:57:11.260760] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
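Condensing the nvmf_veth_init trace above: the target runs in its own network namespace and is reached from the host-side initiator through veth pairs bridged on nvmf_br, which is why the ping checks succeed before the target application starts. A sketch of that topology with the same interface names and addresses as the trace (the second target interface, nvmf_tgt_if2 on 10.0.0.3, is configured the same way and omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: one initiator-side, one target-side (the latter moved into the namespace)
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side veth ends together and let NVMe/TCP traffic in
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2    # initiator -> target reachability check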
00:16:49.855 [2024-12-15 05:57:11.260822] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.855 [2024-12-15 05:57:11.261047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.855 [2024-12-15 05:57:11.261057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.793 05:57:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.793 05:57:12 -- common/autotest_common.sh@862 -- # return 0 00:16:50.793 05:57:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:50.793 05:57:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:50.793 05:57:12 -- common/autotest_common.sh@10 -- # set +x 00:16:50.793 05:57:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.793 05:57:12 -- host/multipath.sh@33 -- # nvmfapp_pid=83916 00:16:50.793 05:57:12 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.793 [2024-12-15 05:57:12.359497] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.793 05:57:12 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:51.360 Malloc0 00:16:51.360 05:57:12 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:51.619 05:57:13 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.878 05:57:13 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.878 [2024-12-15 05:57:13.503053] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.137 05:57:13 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:52.137 [2024-12-15 05:57:13.731266] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:52.137 05:57:13 -- host/multipath.sh@44 -- # bdevperf_pid=83970 00:16:52.137 05:57:13 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:52.137 05:57:13 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:52.137 05:57:13 -- host/multipath.sh@47 -- # waitforlisten 83970 /var/tmp/bdevperf.sock 00:16:52.137 05:57:13 -- common/autotest_common.sh@829 -- # '[' -z 83970 ']' 00:16:52.137 05:57:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.137 05:57:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.137 05:57:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:52.137 05:57:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.137 05:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:53.074 05:57:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.074 05:57:14 -- common/autotest_common.sh@862 -- # return 0 00:16:53.074 05:57:14 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:53.333 05:57:14 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:53.901 Nvme0n1 00:16:53.901 05:57:15 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:54.160 Nvme0n1 00:16:54.160 05:57:15 -- host/multipath.sh@78 -- # sleep 1 00:16:54.160 05:57:15 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:55.096 05:57:16 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:55.096 05:57:16 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:55.355 05:57:16 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:55.615 05:57:17 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:55.615 05:57:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83916 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:55.615 05:57:17 -- host/multipath.sh@65 -- # dtrace_pid=84021 00:16:55.615 05:57:17 -- host/multipath.sh@66 -- # sleep 6 00:17:02.188 05:57:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:02.188 05:57:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:02.188 05:57:23 -- host/multipath.sh@67 -- # active_port=4421 00:17:02.188 05:57:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:02.188 Attaching 4 probes... 
00:17:02.188 @path[10.0.0.2, 4421]: 19442 00:17:02.188 @path[10.0.0.2, 4421]: 20039 00:17:02.188 @path[10.0.0.2, 4421]: 20103 00:17:02.188 @path[10.0.0.2, 4421]: 19856 00:17:02.188 @path[10.0.0.2, 4421]: 20155 00:17:02.188 05:57:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:02.188 05:57:23 -- host/multipath.sh@69 -- # sed -n 1p 00:17:02.188 05:57:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:02.188 05:57:23 -- host/multipath.sh@69 -- # port=4421 00:17:02.188 05:57:23 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:02.188 05:57:23 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:02.188 05:57:23 -- host/multipath.sh@72 -- # kill 84021 00:17:02.188 05:57:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:02.188 05:57:23 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:02.188 05:57:23 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:02.188 05:57:23 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:02.447 05:57:23 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:02.448 05:57:23 -- host/multipath.sh@65 -- # dtrace_pid=84134 00:17:02.448 05:57:23 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83916 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:02.448 05:57:23 -- host/multipath.sh@66 -- # sleep 6 00:17:09.013 05:57:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:09.013 05:57:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:09.013 05:57:30 -- host/multipath.sh@67 -- # active_port=4420 00:17:09.013 05:57:30 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:09.013 Attaching 4 probes... 
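confirm_io_on_port, exercised repeatedly above, cross-checks two views of the active path: the ANA state reported by nvmf_subsystem_get_listeners, and the per-path I/O counters that the nvmf_path.bt bpftrace script dumps into trace.txt. A simplified rendering of that check, reusing the jq/awk expressions from the log (not the verbatim multipath.sh function):
subsys=nqn.2016-06.io.spdk:cnode1
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# port whose listener currently reports the expected ANA state (here: optimized)
active_port=$($rpc nvmf_subsystem_get_listeners $subsys \
    | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# trace.txt holds lines such as "@path[10.0.0.2, 4421]: 19442"; keep the first port seen
port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | sed -n 1p | cut -d ']' -f1)
[[ "$port" == "$active_port" ]] && echo "I/O confirmed on port $active_port"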
00:17:09.013 @path[10.0.0.2, 4420]: 19326 00:17:09.013 @path[10.0.0.2, 4420]: 19553 00:17:09.013 @path[10.0.0.2, 4420]: 19813 00:17:09.013 @path[10.0.0.2, 4420]: 19977 00:17:09.013 @path[10.0.0.2, 4420]: 20221 00:17:09.013 05:57:30 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:09.013 05:57:30 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:09.013 05:57:30 -- host/multipath.sh@69 -- # sed -n 1p 00:17:09.013 05:57:30 -- host/multipath.sh@69 -- # port=4420 00:17:09.013 05:57:30 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:09.013 05:57:30 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:09.013 05:57:30 -- host/multipath.sh@72 -- # kill 84134 00:17:09.013 05:57:30 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:09.013 05:57:30 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:09.013 05:57:30 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:09.013 05:57:30 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:09.271 05:57:30 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:09.271 05:57:30 -- host/multipath.sh@65 -- # dtrace_pid=84248 00:17:09.271 05:57:30 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83916 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:09.271 05:57:30 -- host/multipath.sh@66 -- # sleep 6 00:17:15.869 05:57:36 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:15.869 05:57:36 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:15.869 05:57:36 -- host/multipath.sh@67 -- # active_port=4421 00:17:15.869 05:57:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.869 Attaching 4 probes... 
00:17:15.869 @path[10.0.0.2, 4421]: 14211 00:17:15.869 @path[10.0.0.2, 4421]: 19704 00:17:15.869 @path[10.0.0.2, 4421]: 19865 00:17:15.869 @path[10.0.0.2, 4421]: 19615 00:17:15.869 @path[10.0.0.2, 4421]: 19626 00:17:15.869 05:57:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:15.869 05:57:36 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:15.869 05:57:36 -- host/multipath.sh@69 -- # sed -n 1p 00:17:15.869 05:57:37 -- host/multipath.sh@69 -- # port=4421 00:17:15.869 05:57:37 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:15.869 05:57:37 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:15.869 05:57:37 -- host/multipath.sh@72 -- # kill 84248 00:17:15.869 05:57:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.869 05:57:37 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:15.869 05:57:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:15.869 05:57:37 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:16.128 05:57:37 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:16.128 05:57:37 -- host/multipath.sh@65 -- # dtrace_pid=84360 00:17:16.128 05:57:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83916 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:16.128 05:57:37 -- host/multipath.sh@66 -- # sleep 6 00:17:22.692 05:57:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:22.692 05:57:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:22.692 05:57:43 -- host/multipath.sh@67 -- # active_port= 00:17:22.692 05:57:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.692 Attaching 4 probes... 
00:17:22.692 00:17:22.692 00:17:22.692 00:17:22.692 00:17:22.692 00:17:22.692 05:57:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:22.692 05:57:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:22.692 05:57:43 -- host/multipath.sh@69 -- # sed -n 1p 00:17:22.692 05:57:43 -- host/multipath.sh@69 -- # port= 00:17:22.692 05:57:43 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:22.692 05:57:43 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:22.692 05:57:43 -- host/multipath.sh@72 -- # kill 84360 00:17:22.692 05:57:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.692 05:57:43 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:22.692 05:57:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:22.692 05:57:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:22.951 05:57:44 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:22.951 05:57:44 -- host/multipath.sh@65 -- # dtrace_pid=84479 00:17:22.951 05:57:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83916 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:22.951 05:57:44 -- host/multipath.sh@66 -- # sleep 6 00:17:29.517 05:57:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:29.517 05:57:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:29.517 05:57:50 -- host/multipath.sh@67 -- # active_port=4421 00:17:29.517 05:57:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:29.517 Attaching 4 probes... 
00:17:29.517 @path[10.0.0.2, 4421]: 19108 00:17:29.517 @path[10.0.0.2, 4421]: 19238 00:17:29.517 @path[10.0.0.2, 4421]: 19261 00:17:29.517 @path[10.0.0.2, 4421]: 19278 00:17:29.517 @path[10.0.0.2, 4421]: 19278 00:17:29.517 05:57:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:29.517 05:57:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:29.517 05:57:50 -- host/multipath.sh@69 -- # sed -n 1p 00:17:29.517 05:57:50 -- host/multipath.sh@69 -- # port=4421 00:17:29.517 05:57:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:29.517 05:57:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:29.517 05:57:50 -- host/multipath.sh@72 -- # kill 84479 00:17:29.517 05:57:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:29.517 05:57:50 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:29.517 [2024-12-15 05:57:50.946449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947554] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947613] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947637] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947645] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.517 [2024-12-15 05:57:50.947764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947819] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947938] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 [2024-12-15 05:57:50.947955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efd7a0 is same with the state(5) to be set 00:17:29.518 05:57:50 -- host/multipath.sh@101 -- # sleep 1 00:17:30.452 05:57:51 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:30.452 05:57:51 -- host/multipath.sh@65 -- # dtrace_pid=84608 00:17:30.452 05:57:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83916 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:30.453 05:57:51 -- host/multipath.sh@66 -- # sleep 6 00:17:37.019 05:57:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:37.019 05:57:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:37.019 05:57:58 -- host/multipath.sh@67 -- # active_port=4420 00:17:37.019 05:57:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:37.019 Attaching 4 probes... 
00:17:37.019 @path[10.0.0.2, 4420]: 18630 00:17:37.019 @path[10.0.0.2, 4420]: 19240 00:17:37.019 @path[10.0.0.2, 4420]: 19074 00:17:37.019 @path[10.0.0.2, 4420]: 18997 00:17:37.019 @path[10.0.0.2, 4420]: 18603 00:17:37.019 05:57:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:37.019 05:57:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:37.019 05:57:58 -- host/multipath.sh@69 -- # sed -n 1p 00:17:37.019 05:57:58 -- host/multipath.sh@69 -- # port=4420 00:17:37.019 05:57:58 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:37.019 05:57:58 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:37.019 05:57:58 -- host/multipath.sh@72 -- # kill 84608 00:17:37.019 05:57:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:37.019 05:57:58 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:37.019 [2024-12-15 05:57:58.472372] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:37.019 05:57:58 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:37.278 05:57:58 -- host/multipath.sh@111 -- # sleep 6 00:17:43.844 05:58:04 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:43.844 05:58:04 -- host/multipath.sh@65 -- # dtrace_pid=84782 00:17:43.844 05:58:04 -- host/multipath.sh@66 -- # sleep 6 00:17:43.844 05:58:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83916 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:50.417 05:58:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:50.417 05:58:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:50.417 05:58:11 -- host/multipath.sh@67 -- # active_port=4421 00:17:50.417 05:58:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:50.417 Attaching 4 probes... 
00:17:50.417 @path[10.0.0.2, 4421]: 18749 00:17:50.417 @path[10.0.0.2, 4421]: 19222 00:17:50.417 @path[10.0.0.2, 4421]: 19168 00:17:50.417 @path[10.0.0.2, 4421]: 19245 00:17:50.417 @path[10.0.0.2, 4421]: 19770 00:17:50.417 05:58:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:50.417 05:58:11 -- host/multipath.sh@69 -- # sed -n 1p 00:17:50.417 05:58:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:50.417 05:58:11 -- host/multipath.sh@69 -- # port=4421 00:17:50.417 05:58:11 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:50.417 05:58:11 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:50.417 05:58:11 -- host/multipath.sh@72 -- # kill 84782 00:17:50.417 05:58:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:50.417 05:58:11 -- host/multipath.sh@114 -- # killprocess 83970 00:17:50.417 05:58:11 -- common/autotest_common.sh@936 -- # '[' -z 83970 ']' 00:17:50.417 05:58:11 -- common/autotest_common.sh@940 -- # kill -0 83970 00:17:50.417 05:58:11 -- common/autotest_common.sh@941 -- # uname 00:17:50.417 05:58:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.417 05:58:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83970 00:17:50.417 05:58:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:50.417 05:58:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:50.417 killing process with pid 83970 00:17:50.417 05:58:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83970' 00:17:50.417 05:58:11 -- common/autotest_common.sh@955 -- # kill 83970 00:17:50.417 05:58:11 -- common/autotest_common.sh@960 -- # wait 83970 00:17:50.417 Connection closed with partial response: 00:17:50.417 00:17:50.417 00:17:50.417 05:58:11 -- host/multipath.sh@116 -- # wait 83970 00:17:50.417 05:58:11 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:50.417 [2024-12-15 05:57:13.793193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:50.417 [2024-12-15 05:57:13.793297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83970 ] 00:17:50.417 [2024-12-15 05:57:13.929448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.417 [2024-12-15 05:57:13.969123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.417 Running I/O for 90 seconds... 
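The try.txt dump reproduced below is the bdevperf side of the run. For reference, the host-side commands logged earlier that created the two paths on one bdev and drove this 90-second workload were, condensed (arguments as logged; a sketch, not the verbatim script):
brpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
$brpc bdev_nvme_set_options -r -1
$brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10                       # first path
$brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10          # second path, same bdev, multipath mode
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests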
00:17:50.417 [2024-12-15 05:57:23.979622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.417 [2024-12-15 05:57:23.979703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.979774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.417 [2024-12-15 05:57:23.979795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.979817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.417 [2024-12-15 05:57:23.979832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.979852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.417 [2024-12-15 05:57:23.979866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.979899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.417 [2024-12-15 05:57:23.979916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.979937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.979953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.979973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.979987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.980021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.980054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.980088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.980142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.980180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.980214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.980247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.417 [2024-12-15 05:57:23.980281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:50.417 [2024-12-15 05:57:23.980301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.417 [2024-12-15 05:57:23.980315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.980416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.980486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.980544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.980580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.980661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.980695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:50.418 [2024-12-15 05:57:23.980832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.980975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.980989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 
nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.981206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.981240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.981275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.981348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.981383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.981596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.981664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.418 [2024-12-15 05:57:23.981698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:50.418 [2024-12-15 05:57:23.981718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.418 [2024-12-15 05:57:23.981733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.981753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.981767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.981787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.981802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.981822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.981836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.981855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.981896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:17:50.419 [2024-12-15 05:57:23.981920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.981942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.981964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.981979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.982734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.982972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.982987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.983012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.983027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.983047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:50.419 [2024-12-15 05:57:23.983062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.983082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.983096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.984881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.984914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.984944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.984977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.985002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.419 [2024-12-15 05:57:23.985018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.985040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.419 [2024-12-15 05:57:23.985055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:50.419 [2024-12-15 05:57:23.985077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.985107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.985143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.985190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.985229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.985265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.985301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.985337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.985377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.985415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.985464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.985499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.985534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.985928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.985973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.985995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.986020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.986353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:17:50.420 [2024-12-15 05:57:23.986375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.986390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.986462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.986544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.986617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:23.986653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:23.986711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:23.986726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:30.463917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:30.464008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:30.464064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:30.464085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:30.464106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:30.464121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:30.464142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:30.464156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:30.464176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.420 [2024-12-15 05:57:30.464190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:30.464226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.420 [2024-12-15 05:57:30.464242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.420 [2024-12-15 05:57:30.464276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.464305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464400] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.464662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.464695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121336 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.464728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.464824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.464862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.464894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.464942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.464974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.464992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.465006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.421 [2024-12-15 05:57:30.465038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465056] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.421 [2024-12-15 05:57:30.465356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.421 [2024-12-15 05:57:30.465369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:50.421 
[2024-12-15 05:57:30.465388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.465401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.465433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.465472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.465506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.465538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.465570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.465796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.465839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.465876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.465931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.465969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.465991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.466005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.466090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.422 [2024-12-15 05:57:30.466685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:50.422 [2024-12-15 05:57:30.466719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.466965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.466979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.467001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.467015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.467036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.467050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.467079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.422 [2024-12-15 05:57:30.467105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:50.422 [2024-12-15 05:57:30.467128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 
05:57:30.467557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.467828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.467974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.467988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.468173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.468208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.468243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.468313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.468573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.423 [2024-12-15 05:57:30.468608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:50.423 [2024-12-15 05:57:30.468665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.423 [2024-12-15 05:57:30.468678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.468699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:30.468713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.468735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:30.468748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.468770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:30.468783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.468805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:30.468818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.468840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:30.468853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.468886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:30.468902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.468924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:30.468938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.468962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:30.468977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:30.469006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:83 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:30.469020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.530987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.531852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 
p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.531985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.531999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.532018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.532042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.532063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.532077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.532097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.532111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.532130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.424 [2024-12-15 05:57:37.532144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.532164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.532177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.532197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.532211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.532230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.532244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:50.424 [2024-12-15 05:57:37.532264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.424 [2024-12-15 05:57:37.532277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.532310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.532456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.532524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.532959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.532980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.532995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.533068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.533103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.533138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.533243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:67 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.533362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.425 [2024-12-15 05:57:37.533647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.533685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.425 [2024-12-15 05:57:37.533719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:50.425 [2024-12-15 05:57:37.533739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.533752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.533772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.533787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.533806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.533828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.533849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.533863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.533892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.533909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.533946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.533961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.533997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.534013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.534049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:17:50.426 [2024-12-15 05:57:37.534070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.534123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.534159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.534197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.534553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.426 [2024-12-15 05:57:37.534621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.534982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.534997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.535957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.535985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.536018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.536035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:50.426 [2024-12-15 05:57:37.536063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.426 [2024-12-15 05:57:37.536078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:50.427 [2024-12-15 05:57:37.536162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.536205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.536306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.536516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:29 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.536662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.536798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.536840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.536895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.536939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.536966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.536981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.537009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:37.537024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:37.537051] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:37.537065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.427 [2024-12-15 05:57:50.948557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.427 [2024-12-15 05:57:50.948572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.427 [2024-12-15 05:57:50.948585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948637] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.948914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.948942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.948976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.948992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 
05:57:50.949513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.428 [2024-12-15 05:57:50.949679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.428 [2024-12-15 05:57:50.949756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.428 [2024-12-15 05:57:50.949769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.949784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.949797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.949812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.949825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.949840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.949854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.949878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.949895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.949910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.949923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.949938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.949953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.949968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.949982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.949996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.950009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.950185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.950497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.950580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.950635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 
[2024-12-15 05:57:50.950694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.429 [2024-12-15 05:57:50.950764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.429 [2024-12-15 05:57:50.950946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.429 [2024-12-15 05:57:50.950960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.950975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.950988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.430 [2024-12-15 05:57:50.951074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 05:58:11 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.430 [2024-12-15 05:57:50.951186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.430 [2024-12-15 05:57:50.951210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.430 [2024-12-15 05:57:50.951240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.430 [2024-12-15 05:57:50.951269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.430 [2024-12-15 05:57:50.951297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.430 [2024-12-15 05:57:50.951391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.430 [2024-12-15 05:57:50.951449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.430 [2024-12-15 05:57:50.951577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:50.430 [2024-12-15 05:57:50.951626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.430 [2024-12-15 05:57:50.951808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x837100 is same with the state(5) to be set 00:17:50.430 [2024-12-15 05:57:50.951839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:50.430 [2024-12-15 05:57:50.951850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:50.430 [2024-12-15 05:57:50.951861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98280 len:8 PRP1 0x0 PRP2 0x0 00:17:50.430 [2024-12-15 05:57:50.951874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.430 [2024-12-15 05:57:50.951930] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x837100 was disconnected and freed. reset controller. 
00:17:50.430 [2024-12-15 05:57:50.953035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:50.430 [2024-12-15 05:57:50.953121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8463c0 (9): Bad file descriptor 00:17:50.430 [2024-12-15 05:57:50.953424] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:50.430 [2024-12-15 05:57:50.953511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:50.430 [2024-12-15 05:57:50.953563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:50.430 [2024-12-15 05:57:50.953597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8463c0 with addr=10.0.0.2, port=4421 00:17:50.430 [2024-12-15 05:57:50.953616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8463c0 is same with the state(5) to be set 00:17:50.430 [2024-12-15 05:57:50.953652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8463c0 (9): Bad file descriptor 00:17:50.430 [2024-12-15 05:57:50.953685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:50.430 [2024-12-15 05:57:50.953703] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:50.430 [2024-12-15 05:57:50.953719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:50.430 [2024-12-15 05:57:50.953751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:50.430 [2024-12-15 05:57:50.953769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:50.431 [2024-12-15 05:58:01.017446] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
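The abort flood above is the expected effect of a path being torn down under load: every command still queued on the old queue pair completes with ABORTED - SQ DELETION, bdev_nvme resets the controller, the first reconnect attempts are refused (errno = 111), and the host finally re-establishes the session on the surviving listener at 10.0.0.2:4421 before the run ends. A minimal target-side sketch of that path switch, reusing only RPC calls that appear in this log (the rpc/nqn shorthand is mine and the exact ordering inside multipath.sh may differ; this is a hypothetical reconstruction, not a transcript of the script):

  # Sketch only: assumes the subsystem is already exported on both
  # 10.0.0.2:4420 and 10.0.0.2:4421, as set up earlier in this run.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Dropping the listener the host is currently connected to aborts its
  # queued I/O (SQ DELETION) and forces bdev_nvme to reset the controller
  # and reconnect through the remaining listener on port 4421.
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

  # At the end of the run (multipath.sh@120 above) the subsystem is deleted
  # outright, which tears down the remaining connection as well.
  "$rpc" nvmf_delete_subsystem "$nqn"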
00:17:50.431 Received shutdown signal, test time was about 55.398623 seconds 00:17:50.431 00:17:50.431 Latency(us) 00:17:50.431 [2024-12-15T05:58:12.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.431 [2024-12-15T05:58:12.072Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:50.431 Verification LBA range: start 0x0 length 0x4000 00:17:50.431 Nvme0n1 : 55.40 11060.92 43.21 0.00 0.00 11552.74 841.54 7046430.72 00:17:50.431 [2024-12-15T05:58:12.072Z] =================================================================================================================== 00:17:50.431 [2024-12-15T05:58:12.072Z] Total : 11060.92 43.21 0.00 0.00 11552.74 841.54 7046430.72 00:17:50.431 05:58:11 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:50.431 05:58:11 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:50.431 05:58:11 -- host/multipath.sh@125 -- # nvmftestfini 00:17:50.431 05:58:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:50.431 05:58:11 -- nvmf/common.sh@116 -- # sync 00:17:50.431 05:58:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:50.431 05:58:11 -- nvmf/common.sh@119 -- # set +e 00:17:50.431 05:58:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:50.431 05:58:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:50.431 rmmod nvme_tcp 00:17:50.431 rmmod nvme_fabrics 00:17:50.431 rmmod nvme_keyring 00:17:50.431 05:58:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:50.431 05:58:11 -- nvmf/common.sh@123 -- # set -e 00:17:50.431 05:58:11 -- nvmf/common.sh@124 -- # return 0 00:17:50.431 05:58:11 -- nvmf/common.sh@477 -- # '[' -n 83916 ']' 00:17:50.431 05:58:11 -- nvmf/common.sh@478 -- # killprocess 83916 00:17:50.431 05:58:11 -- common/autotest_common.sh@936 -- # '[' -z 83916 ']' 00:17:50.431 05:58:11 -- common/autotest_common.sh@940 -- # kill -0 83916 00:17:50.431 05:58:11 -- common/autotest_common.sh@941 -- # uname 00:17:50.431 05:58:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.431 05:58:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83916 00:17:50.431 05:58:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.431 05:58:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.431 killing process with pid 83916 00:17:50.431 05:58:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83916' 00:17:50.431 05:58:11 -- common/autotest_common.sh@955 -- # kill 83916 00:17:50.431 05:58:11 -- common/autotest_common.sh@960 -- # wait 83916 00:17:50.431 05:58:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.431 05:58:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:50.431 05:58:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:50.431 05:58:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.431 05:58:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:50.431 05:58:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.431 05:58:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.431 05:58:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.431 05:58:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:50.431 00:17:50.431 real 1m1.329s 00:17:50.431 user 2m50.275s 00:17:50.431 sys 0m17.966s 00:17:50.431 05:58:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.431 ************************************ 
00:17:50.431 END TEST nvmf_multipath 00:17:50.431 05:58:11 -- common/autotest_common.sh@10 -- # set +x 00:17:50.431 ************************************ 00:17:50.431 05:58:11 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:50.431 05:58:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:50.431 05:58:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.431 05:58:11 -- common/autotest_common.sh@10 -- # set +x 00:17:50.431 ************************************ 00:17:50.431 START TEST nvmf_timeout 00:17:50.431 ************************************ 00:17:50.431 05:58:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:50.431 * Looking for test storage... 00:17:50.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:50.431 05:58:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:50.431 05:58:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:50.431 05:58:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:50.431 05:58:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:50.431 05:58:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:50.431 05:58:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.431 05:58:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.431 05:58:12 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.431 05:58:12 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.431 05:58:12 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.431 05:58:12 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.431 05:58:12 -- scripts/common.sh@337 -- # local 'op=<' 00:17:50.431 05:58:12 -- scripts/common.sh@339 -- # ver1_l=2 00:17:50.431 05:58:12 -- scripts/common.sh@340 -- # ver2_l=1 00:17:50.431 05:58:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.431 05:58:12 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.431 05:58:12 -- scripts/common.sh@344 -- # : 1 00:17:50.431 05:58:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.431 05:58:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.431 05:58:12 -- scripts/common.sh@364 -- # decimal 1 00:17:50.431 05:58:12 -- scripts/common.sh@352 -- # local d=1 00:17:50.431 05:58:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.431 05:58:12 -- scripts/common.sh@354 -- # echo 1 00:17:50.431 05:58:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.431 05:58:12 -- scripts/common.sh@365 -- # decimal 2 00:17:50.431 05:58:12 -- scripts/common.sh@352 -- # local d=2 00:17:50.431 05:58:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.431 05:58:12 -- scripts/common.sh@354 -- # echo 2 00:17:50.431 05:58:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:50.431 05:58:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.431 05:58:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.431 05:58:12 -- scripts/common.sh@367 -- # return 0 00:17:50.431 05:58:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.431 05:58:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:50.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.431 --rc genhtml_branch_coverage=1 00:17:50.431 --rc genhtml_function_coverage=1 00:17:50.431 --rc genhtml_legend=1 00:17:50.431 --rc geninfo_all_blocks=1 00:17:50.431 --rc geninfo_unexecuted_blocks=1 00:17:50.431 00:17:50.431 ' 00:17:50.431 05:58:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:50.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.431 --rc genhtml_branch_coverage=1 00:17:50.431 --rc genhtml_function_coverage=1 00:17:50.431 --rc genhtml_legend=1 00:17:50.431 --rc geninfo_all_blocks=1 00:17:50.431 --rc geninfo_unexecuted_blocks=1 00:17:50.431 00:17:50.431 ' 00:17:50.431 05:58:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:50.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.431 --rc genhtml_branch_coverage=1 00:17:50.431 --rc genhtml_function_coverage=1 00:17:50.431 --rc genhtml_legend=1 00:17:50.431 --rc geninfo_all_blocks=1 00:17:50.431 --rc geninfo_unexecuted_blocks=1 00:17:50.431 00:17:50.431 ' 00:17:50.431 05:58:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:50.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.431 --rc genhtml_branch_coverage=1 00:17:50.431 --rc genhtml_function_coverage=1 00:17:50.431 --rc genhtml_legend=1 00:17:50.431 --rc geninfo_all_blocks=1 00:17:50.431 --rc geninfo_unexecuted_blocks=1 00:17:50.431 00:17:50.431 ' 00:17:50.431 05:58:12 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.431 05:58:12 -- nvmf/common.sh@7 -- # uname -s 00:17:50.431 05:58:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.431 05:58:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.431 05:58:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.431 05:58:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.431 05:58:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.431 05:58:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.431 05:58:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.431 05:58:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.431 05:58:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.431 05:58:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.691 05:58:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:17:50.691 
05:58:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:17:50.691 05:58:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.691 05:58:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.691 05:58:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.691 05:58:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.691 05:58:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.691 05:58:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.691 05:58:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.691 05:58:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.691 05:58:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.691 05:58:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.691 05:58:12 -- paths/export.sh@5 -- # export PATH 00:17:50.691 05:58:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.691 05:58:12 -- nvmf/common.sh@46 -- # : 0 00:17:50.691 05:58:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.691 05:58:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.691 05:58:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.691 05:58:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.691 05:58:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.691 05:58:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:50.691 05:58:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.691 05:58:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.691 05:58:12 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.691 05:58:12 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.691 05:58:12 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.691 05:58:12 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:50.691 05:58:12 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.691 05:58:12 -- host/timeout.sh@19 -- # nvmftestinit 00:17:50.691 05:58:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.691 05:58:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.691 05:58:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.691 05:58:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.691 05:58:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.691 05:58:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.691 05:58:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.691 05:58:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.691 05:58:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:50.691 05:58:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:50.691 05:58:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:50.691 05:58:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:50.691 05:58:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:50.691 05:58:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:50.691 05:58:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.691 05:58:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.691 05:58:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:50.691 05:58:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:50.691 05:58:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.691 05:58:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.691 05:58:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.691 05:58:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.691 05:58:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.691 05:58:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.691 05:58:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.691 05:58:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.691 05:58:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:50.691 05:58:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:50.691 Cannot find device "nvmf_tgt_br" 00:17:50.691 05:58:12 -- nvmf/common.sh@154 -- # true 00:17:50.691 05:58:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.691 Cannot find device "nvmf_tgt_br2" 00:17:50.691 05:58:12 -- nvmf/common.sh@155 -- # true 00:17:50.691 05:58:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:50.691 05:58:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:50.691 Cannot find device "nvmf_tgt_br" 00:17:50.691 05:58:12 -- nvmf/common.sh@157 -- # true 00:17:50.691 05:58:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:50.691 Cannot find device "nvmf_tgt_br2" 00:17:50.691 05:58:12 -- nvmf/common.sh@158 -- # true 00:17:50.691 05:58:12 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:50.691 05:58:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:50.691 05:58:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.691 05:58:12 -- nvmf/common.sh@161 -- # true 00:17:50.691 05:58:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.691 05:58:12 -- nvmf/common.sh@162 -- # true 00:17:50.691 05:58:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.691 05:58:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.691 05:58:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.691 05:58:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.691 05:58:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.691 05:58:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.691 05:58:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.691 05:58:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:50.691 05:58:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:50.691 05:58:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:50.691 05:58:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:50.691 05:58:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:50.691 05:58:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:50.691 05:58:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.691 05:58:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.691 05:58:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.691 05:58:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:50.691 05:58:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:50.691 05:58:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:50.691 05:58:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.691 05:58:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.951 05:58:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.951 05:58:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.951 05:58:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:50.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:17:50.951 00:17:50.951 --- 10.0.0.2 ping statistics --- 00:17:50.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.951 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:50.951 05:58:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:50.951 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:50.951 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:50.951 00:17:50.951 --- 10.0.0.3 ping statistics --- 00:17:50.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.951 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:50.951 05:58:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:17:50.951 00:17:50.951 --- 10.0.0.1 ping statistics --- 00:17:50.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.951 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:50.951 05:58:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.951 05:58:12 -- nvmf/common.sh@421 -- # return 0 00:17:50.951 05:58:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:50.951 05:58:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.951 05:58:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:50.951 05:58:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:50.951 05:58:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.951 05:58:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:50.951 05:58:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:50.951 05:58:12 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:50.951 05:58:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:50.951 05:58:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.951 05:58:12 -- common/autotest_common.sh@10 -- # set +x 00:17:50.951 05:58:12 -- nvmf/common.sh@469 -- # nvmfpid=85099 00:17:50.951 05:58:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:50.951 05:58:12 -- nvmf/common.sh@470 -- # waitforlisten 85099 00:17:50.951 05:58:12 -- common/autotest_common.sh@829 -- # '[' -z 85099 ']' 00:17:50.951 05:58:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.951 05:58:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.951 05:58:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.951 05:58:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.951 05:58:12 -- common/autotest_common.sh@10 -- # set +x 00:17:50.951 [2024-12-15 05:58:12.426025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:50.951 [2024-12-15 05:58:12.426104] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.951 [2024-12-15 05:58:12.558151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:51.211 [2024-12-15 05:58:12.592500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.211 [2024-12-15 05:58:12.592649] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.211 [2024-12-15 05:58:12.592662] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:51.211 [2024-12-15 05:58:12.592671] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.211 [2024-12-15 05:58:12.592817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.211 [2024-12-15 05:58:12.592828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.211 05:58:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.211 05:58:12 -- common/autotest_common.sh@862 -- # return 0 00:17:51.211 05:58:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:51.211 05:58:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.211 05:58:12 -- common/autotest_common.sh@10 -- # set +x 00:17:51.211 05:58:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.211 05:58:12 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.211 05:58:12 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:51.470 [2024-12-15 05:58:13.001757] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.470 05:58:13 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:51.728 Malloc0 00:17:51.729 05:58:13 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.987 05:58:13 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:52.246 05:58:13 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.506 [2024-12-15 05:58:14.033275] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.506 05:58:14 -- host/timeout.sh@32 -- # bdevperf_pid=85140 00:17:52.506 05:58:14 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:52.506 05:58:14 -- host/timeout.sh@34 -- # waitforlisten 85140 /var/tmp/bdevperf.sock 00:17:52.506 05:58:14 -- common/autotest_common.sh@829 -- # '[' -z 85140 ']' 00:17:52.506 05:58:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.506 05:58:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.506 05:58:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.506 05:58:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.506 05:58:14 -- common/autotest_common.sh@10 -- # set +x 00:17:52.506 [2024-12-15 05:58:14.093660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
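While bdevperf initializes, note the target-side configuration that host/timeout.sh issued above (@25 through @29); condensed into plain rpc.py calls it is roughly:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as used by the test
    $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf instance started at @31 (-q 128 -o 4096 -w verify -t 10) then connects to this subsystem as an NVMe/TCP initiator over 10.0.0.2:4420.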
00:17:52.506 [2024-12-15 05:58:14.093945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85140 ] 00:17:52.765 [2024-12-15 05:58:14.228695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.765 [2024-12-15 05:58:14.267799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.699 05:58:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.699 05:58:15 -- common/autotest_common.sh@862 -- # return 0 00:17:53.699 05:58:15 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:53.699 05:58:15 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:54.266 NVMe0n1 00:17:54.266 05:58:15 -- host/timeout.sh@51 -- # rpc_pid=85164 00:17:54.266 05:58:15 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:54.266 05:58:15 -- host/timeout.sh@53 -- # sleep 1 00:17:54.266 Running I/O for 10 seconds... 00:17:55.203 05:58:16 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.475 [2024-12-15 05:58:16.909805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 [2024-12-15 05:58:16.909990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936a60 is same with the state(5) to be set 00:17:55.475 
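The recv-state errors that begin here are the direct result of host/timeout.sh@55 removing the 10.0.0.2:4420 listener roughly one second into the verify run. The initiator-side sequence that sets this scenario up, condensed from the @45/@46/@50/@53/@55 trace above, is roughly:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # bdevperf side (RPC socket /var/tmp/bdevperf.sock): the same flags shown at
    # host/timeout.sh@45 and @46
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start the 10 s verify workload (backgrounded here for illustration; the script keeps
    # its pid as rpc_pid), wait a second, then remove the listener out from under it
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420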
[... tcp.c:1576:nvmf_tcp_qpair_set_recv_state repeats the same *ERROR* ("The recv state of tqpair=0x1936a60 is same with the state(5) to be set") several dozen more times while the qpair is torn down; nvme_qpair.c then logs every command still outstanding on qid:1 as print_command/print_completion pairs (READ and WRITE, nsid:1, lba 125928 through 127232, len:8 each), completing each of them with ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:17:55.478 [2024-12-15 05:58:16.913137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efb9a0 is same with the state(5) to be set 00:17:55.478 [2024-12-15 05:58:16.913153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs:
*ERROR*: aborting queued i/o 00:17:55.478 [2024-12-15 05:58:16.913161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.478 [2024-12-15 05:58:16.913170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127240 len:8 PRP1 0x0 PRP2 0x0 00:17:55.478 [2024-12-15 05:58:16.913178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.478 [2024-12-15 05:58:16.913220] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1efb9a0 was disconnected and freed. reset controller. 00:17:55.478 [2024-12-15 05:58:16.913311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.478 [2024-12-15 05:58:16.913327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.478 [2024-12-15 05:58:16.913337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.478 [2024-12-15 05:58:16.913347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.478 [2024-12-15 05:58:16.913356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.478 [2024-12-15 05:58:16.913365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.478 [2024-12-15 05:58:16.913376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.478 [2024-12-15 05:58:16.913384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.478 [2024-12-15 05:58:16.913393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f00610 is same with the state(5) to be set 00:17:55.478 [2024-12-15 05:58:16.913615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.478 [2024-12-15 05:58:16.913636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f00610 (9): Bad file descriptor 00:17:55.478 [2024-12-15 05:58:16.913745] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.478 [2024-12-15 05:58:16.913808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.478 [2024-12-15 05:58:16.913850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.478 [2024-12-15 05:58:16.913865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f00610 with addr=10.0.0.2, port=4420 00:17:55.478 [2024-12-15 05:58:16.913893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f00610 is same with the state(5) to be set 00:17:55.478 [2024-12-15 05:58:16.913914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f00610 (9): Bad file descriptor 00:17:55.478 [2024-12-15 05:58:16.913930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:55.478 [2024-12-15 05:58:16.913940] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:55.478 [2024-12-15 05:58:16.913950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:55.478 [2024-12-15 05:58:16.927358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:55.478 [2024-12-15 05:58:16.927401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.478 05:58:16 -- host/timeout.sh@56 -- # sleep 2 00:17:57.406 [2024-12-15 05:58:18.927578] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.406 [2024-12-15 05:58:18.927910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.406 [2024-12-15 05:58:18.927965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.406 [2024-12-15 05:58:18.927983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f00610 with addr=10.0.0.2, port=4420 00:17:57.406 [2024-12-15 05:58:18.927997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f00610 is same with the state(5) to be set 00:17:57.406 [2024-12-15 05:58:18.928031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f00610 (9): Bad file descriptor 00:17:57.406 [2024-12-15 05:58:18.928051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.406 [2024-12-15 05:58:18.928061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:57.406 [2024-12-15 05:58:18.928071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:57.406 [2024-12-15 05:58:18.928098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:57.406 [2024-12-15 05:58:18.928110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:57.406 05:58:18 -- host/timeout.sh@57 -- # get_controller 00:17:57.406 05:58:18 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:57.406 05:58:18 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:57.665 05:58:19 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:57.665 05:58:19 -- host/timeout.sh@58 -- # get_bdev 00:17:57.665 05:58:19 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:57.665 05:58:19 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:57.923 05:58:19 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:57.923 05:58:19 -- host/timeout.sh@61 -- # sleep 5 00:17:59.300 [2024-12-15 05:58:20.928219] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:59.300 [2024-12-15 05:58:20.928327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:59.300 [2024-12-15 05:58:20.928369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:59.300 [2024-12-15 05:58:20.928384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f00610 with addr=10.0.0.2, port=4420 00:17:59.300 [2024-12-15 05:58:20.928396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f00610 is same with the state(5) to be set 00:17:59.300 [2024-12-15 05:58:20.928421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f00610 (9): Bad file descriptor 00:17:59.300 [2024-12-15 05:58:20.928440] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:59.300 [2024-12-15 05:58:20.928449] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:59.300 [2024-12-15 05:58:20.928459] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:59.300 [2024-12-15 05:58:20.928484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:59.300 [2024-12-15 05:58:20.928495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:01.832 [2024-12-15 05:58:22.928520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:01.832 [2024-12-15 05:58:22.928565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:01.832 [2024-12-15 05:58:22.928593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:01.832 [2024-12-15 05:58:22.928603] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:01.832 [2024-12-15 05:58:22.928629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
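The two checks traced just above (host/timeout.sh@57 and @58) read the controller and bdev names back over the bdevperf RPC socket before letting the reconnect loop continue. A minimal sketch of that pattern, assuming the rpc.py path and the /var/tmp/bdevperf.sock socket shown in this run:

# Read back the attached controller and bdev names from the running bdevperf
# instance (sketch of the get_controller / get_bdev helpers traced above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ctrl=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$($rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
[[ $ctrl == NVMe0 && $bdev == NVMe0n1 ]]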
00:18:02.400 00:18:02.400 Latency(us) 00:18:02.400 [2024-12-15T05:58:24.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.400 [2024-12-15T05:58:24.041Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:02.400 Verification LBA range: start 0x0 length 0x4000 00:18:02.400 NVMe0n1 : 8.17 1932.29 7.55 15.68 0.00 65614.82 2740.60 7015926.69 00:18:02.400 [2024-12-15T05:58:24.041Z] =================================================================================================================== 00:18:02.400 [2024-12-15T05:58:24.041Z] Total : 1932.29 7.55 15.68 0.00 65614.82 2740.60 7015926.69 00:18:02.400 0 00:18:02.967 05:58:24 -- host/timeout.sh@62 -- # get_controller 00:18:02.967 05:58:24 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:02.968 05:58:24 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:03.226 05:58:24 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:03.226 05:58:24 -- host/timeout.sh@63 -- # get_bdev 00:18:03.226 05:58:24 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:03.226 05:58:24 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:03.486 05:58:25 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:03.486 05:58:25 -- host/timeout.sh@65 -- # wait 85164 00:18:03.486 05:58:25 -- host/timeout.sh@67 -- # killprocess 85140 00:18:03.486 05:58:25 -- common/autotest_common.sh@936 -- # '[' -z 85140 ']' 00:18:03.487 05:58:25 -- common/autotest_common.sh@940 -- # kill -0 85140 00:18:03.487 05:58:25 -- common/autotest_common.sh@941 -- # uname 00:18:03.487 05:58:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.487 05:58:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85140 00:18:03.487 killing process with pid 85140 00:18:03.487 Received shutdown signal, test time was about 9.291948 seconds 00:18:03.487 00:18:03.487 Latency(us) 00:18:03.487 [2024-12-15T05:58:25.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.487 [2024-12-15T05:58:25.128Z] =================================================================================================================== 00:18:03.487 [2024-12-15T05:58:25.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.487 05:58:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:03.487 05:58:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:03.487 05:58:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85140' 00:18:03.487 05:58:25 -- common/autotest_common.sh@955 -- # kill 85140 00:18:03.487 05:58:25 -- common/autotest_common.sh@960 -- # wait 85140 00:18:03.745 05:58:25 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.745 [2024-12-15 05:58:25.378641] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
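killprocess, as traced above, first probes the pid with kill -0, checks the process name, and only then sends the signal and reaps it. A sketch under the same assumptions (85140 is the bdevperf pid from this run and a child of the calling shell, so wait can reap it):

# killprocess pattern from the trace: confirm the pid is still alive and is not
# a sudo wrapper, then terminate it and wait for it to exit.
pid=85140
kill -0 "$pid"
name=$(ps --no-headers -o comm= "$pid")
[ "$name" != sudo ] && kill "$pid"
wait "$pid"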
00:18:04.003 05:58:25 -- host/timeout.sh@74 -- # bdevperf_pid=85286 00:18:04.003 05:58:25 -- host/timeout.sh@76 -- # waitforlisten 85286 /var/tmp/bdevperf.sock 00:18:04.003 05:58:25 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:04.003 05:58:25 -- common/autotest_common.sh@829 -- # '[' -z 85286 ']' 00:18:04.003 05:58:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.003 05:58:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.003 05:58:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.003 05:58:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.003 05:58:25 -- common/autotest_common.sh@10 -- # set +x 00:18:04.003 [2024-12-15 05:58:25.434073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:04.003 [2024-12-15 05:58:25.434284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85286 ] 00:18:04.003 [2024-12-15 05:58:25.569679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.003 [2024-12-15 05:58:25.603360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.939 05:58:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.939 05:58:26 -- common/autotest_common.sh@862 -- # return 0 00:18:04.939 05:58:26 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:05.197 05:58:26 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:05.456 NVMe0n1 00:18:05.456 05:58:26 -- host/timeout.sh@84 -- # rpc_pid=85311 00:18:05.456 05:58:26 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.456 05:58:26 -- host/timeout.sh@86 -- # sleep 1 00:18:05.456 Running I/O for 10 seconds... 
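The bdevperf setup traced above is what drives the rest of this run: bdev_nvme options are set with -r -1 as issued, and the TCP controller is attached with a 5 s ctrlr-loss timeout, 2 s fast-io-fail and 1 s reconnect delay, so dropping the target listener later produces the reconnect loop seen below. The RPCs, as issued in this run:

# Reconnect tuning for the timeout test: bdev_nvme options (-r -1 as traced),
# then attach the NVMe/TCP controller with short loss/fast-io-fail/reconnect
# timeouts against 10.0.0.2:4420.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1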
00:18:06.391 05:58:27 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.651 [2024-12-15 05:58:28.232120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.232437] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.232594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.232715] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.232775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.232911] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233160] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361b0 is same with the state(5) to be set 00:18:06.651 [2024-12-15 05:58:28.233274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.651 [2024-12-15 05:58:28.233304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.651 [2024-12-15 05:58:28.233326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.651 [2024-12-15 05:58:28.233336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.651 [2024-12-15 05:58:28.233348] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.651 [2024-12-15 05:58:28.233357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.651 [2024-12-15 05:58:28.233368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.233682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.233739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233785] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.233984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.233995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.234004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 
nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.234040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.234077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.234098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.234119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.234140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.234161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.234182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.652 [2024-12-15 05:58:28.234202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.234224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.234245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129832 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:06.652 [2024-12-15 05:58:28.234265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.652 [2024-12-15 05:58:28.234277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:06.653 [2024-12-15 05:58:28.234477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 
05:58:28.234712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.234853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.234981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.234992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.235001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.235012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.235022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.235033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.235042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.235053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.235062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.235073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.235083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.235094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.235104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.235115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.653 [2024-12-15 05:58:28.235125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.235166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.235177] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.653 [2024-12-15 05:58:28.235189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.653 [2024-12-15 05:58:28.235198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 
05:58:28.235855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.654 [2024-12-15 05:58:28.235976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.235986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.235995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.236006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.236015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.236026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.236036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.236047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.236056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.236067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.654 [2024-12-15 05:58:28.236076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.654 [2024-12-15 05:58:28.236087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.655 [2024-12-15 05:58:28.236096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.655 [2024-12-15 05:58:28.236107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.655 [2024-12-15 05:58:28.236116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.655 [2024-12-15 05:58:28.236126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.655 [2024-12-15 05:58:28.236136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.655 [2024-12-15 05:58:28.236146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9870 is same with the state(5) to be set 00:18:06.655 [2024-12-15 05:58:28.236160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:06.655 [2024-12-15 05:58:28.236169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:06.655 [2024-12-15 05:58:28.236178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130272 len:8 PRP1 0x0 PRP2 0x0 00:18:06.655 [2024-12-15 05:58:28.236187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.655 [2024-12-15 05:58:28.236226] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bf9870 was disconnected and freed. reset controller. 
00:18:06.655 [2024-12-15 05:58:28.236462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:06.655 [2024-12-15 05:58:28.236535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfe450 (9): Bad file descriptor 00:18:06.655 [2024-12-15 05:58:28.236628] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.655 [2024-12-15 05:58:28.236685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.655 [2024-12-15 05:58:28.236724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.655 [2024-12-15 05:58:28.236739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfe450 with addr=10.0.0.2, port=4420 00:18:06.655 [2024-12-15 05:58:28.236750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe450 is same with the state(5) to be set 00:18:06.655 [2024-12-15 05:58:28.236768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfe450 (9): Bad file descriptor 00:18:06.655 [2024-12-15 05:58:28.236784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:06.655 [2024-12-15 05:58:28.236792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:06.655 [2024-12-15 05:58:28.236804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:06.655 [2024-12-15 05:58:28.236824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:06.655 [2024-12-15 05:58:28.236834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:06.655 05:58:28 -- host/timeout.sh@90 -- # sleep 1 00:18:08.030 [2024-12-15 05:58:29.236936] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.030 [2024-12-15 05:58:29.237033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.030 [2024-12-15 05:58:29.237071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.030 [2024-12-15 05:58:29.237086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfe450 with addr=10.0.0.2, port=4420 00:18:08.030 [2024-12-15 05:58:29.237098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe450 is same with the state(5) to be set 00:18:08.030 [2024-12-15 05:58:29.237123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfe450 (9): Bad file descriptor 00:18:08.030 [2024-12-15 05:58:29.237140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:08.030 [2024-12-15 05:58:29.237149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:08.030 [2024-12-15 05:58:29.237158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:08.030 [2024-12-15 05:58:29.237182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:08.030 [2024-12-15 05:58:29.237193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:08.030 05:58:29 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:08.030 [2024-12-15 05:58:29.506553] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:08.030 05:58:29 -- host/timeout.sh@92 -- # wait 85311
00:18:08.964 [2024-12-15 05:58:30.248286] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:15.576
00:18:15.576 Latency(us)
00:18:15.576 [2024-12-15T05:58:37.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:15.576 [2024-12-15T05:58:37.217Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:15.576 Verification LBA range: start 0x0 length 0x4000
00:18:15.576 NVMe0n1 : 10.01 9954.28 38.88 0.00 0.00 12836.21 875.05 3019898.88
00:18:15.576 [2024-12-15T05:58:37.217Z] ===================================================================================================================
00:18:15.576 [2024-12-15T05:58:37.217Z] Total : 9954.28 38.88 0.00 0.00 12836.21 875.05 3019898.88
00:18:15.576 0
00:18:15.576 05:58:37 -- host/timeout.sh@97 -- # rpc_pid=85416
00:18:15.576 05:58:37 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:15.576 05:58:37 -- host/timeout.sh@98 -- # sleep 1
00:18:15.836 Running I/O for 10 seconds...
00:18:16.776 05:58:38 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:16.776 [2024-12-15 05:58:38.353047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353100] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353130] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353146] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set
00:18:16.776 [2024-12-15 05:58:38.353194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.776 [2024-12-15 05:58:38.353401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1933d80 is same with the state(5) to be set 00:18:16.777 [2024-12-15 05:58:38.353460] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.353985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.353996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.354006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.354027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.354048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.354068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.354088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.777 [2024-12-15 05:58:38.354110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.354130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.354150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354161] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.777 [2024-12-15 05:58:38.354186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.777 [2024-12-15 05:58:38.354207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.777 [2024-12-15 05:58:38.354228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.777 [2024-12-15 05:58:38.354250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.777 [2024-12-15 05:58:38.354271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.777 [2024-12-15 05:58:38.354282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122880 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:16.778 [2024-12-15 05:58:38.354812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.354954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.354986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.354995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.355007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.778 [2024-12-15 05:58:38.355016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.355027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.778 [2024-12-15 05:58:38.355037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.778 [2024-12-15 05:58:38.355048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.779 [2024-12-15 05:58:38.355594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.779 [2024-12-15 05:58:38.355784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.779 [2024-12-15 05:58:38.355795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.355805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.355816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.355825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.355837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.355846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.355858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.355889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.355899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.355911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.355921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.355937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.355948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.355961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.355971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.355982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.355991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.356012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.356033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.356054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.356075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.356096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.356116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 
[2024-12-15 05:58:38.356128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.356138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.780 [2024-12-15 05:58:38.356159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.356180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.356200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.356221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.780 [2024-12-15 05:58:38.356242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb1680 is same with the state(5) to be set 00:18:16.780 [2024-12-15 05:58:38.356265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.780 [2024-12-15 05:58:38.356276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.780 [2024-12-15 05:58:38.356285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123288 len:8 PRP1 0x0 PRP2 0x0 00:18:16.780 [2024-12-15 05:58:38.356297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.780 [2024-12-15 05:58:38.356339] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb1680 was disconnected and freed. reset controller. 
00:18:16.780 [2024-12-15 05:58:38.356585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.780 [2024-12-15 05:58:38.356657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfe450 (9): Bad file descriptor 00:18:16.780 [2024-12-15 05:58:38.356756] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.780 [2024-12-15 05:58:38.356806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.780 [2024-12-15 05:58:38.356846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.780 [2024-12-15 05:58:38.356862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfe450 with addr=10.0.0.2, port=4420 00:18:16.780 [2024-12-15 05:58:38.356887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe450 is same with the state(5) to be set 00:18:16.780 [2024-12-15 05:58:38.356909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfe450 (9): Bad file descriptor 00:18:16.780 [2024-12-15 05:58:38.356925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:16.780 [2024-12-15 05:58:38.356936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:16.780 [2024-12-15 05:58:38.356946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:16.780 [2024-12-15 05:58:38.356967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:16.780 [2024-12-15 05:58:38.356979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.780 05:58:38 -- host/timeout.sh@101 -- # sleep 3 00:18:18.157 [2024-12-15 05:58:39.357104] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.157 [2024-12-15 05:58:39.357220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.157 [2024-12-15 05:58:39.357262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.157 [2024-12-15 05:58:39.357278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfe450 with addr=10.0.0.2, port=4420 00:18:18.157 [2024-12-15 05:58:39.357292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe450 is same with the state(5) to be set 00:18:18.157 [2024-12-15 05:58:39.357349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfe450 (9): Bad file descriptor 00:18:18.157 [2024-12-15 05:58:39.357368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:18.157 [2024-12-15 05:58:39.357394] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:18.157 [2024-12-15 05:58:39.357405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:18.157 [2024-12-15 05:58:39.357432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:18.157 [2024-12-15 05:58:39.357444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:18.725 [2024-12-15 05:58:40.357576] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.725 [2024-12-15 05:58:40.357675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.725 [2024-12-15 05:58:40.357715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.725 [2024-12-15 05:58:40.357730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfe450 with addr=10.0.0.2, port=4420 00:18:18.725 [2024-12-15 05:58:40.357743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe450 is same with the state(5) to be set 00:18:18.725 [2024-12-15 05:58:40.357769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfe450 (9): Bad file descriptor 00:18:18.725 [2024-12-15 05:58:40.357788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:18.725 [2024-12-15 05:58:40.357797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:18.725 [2024-12-15 05:58:40.357807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:18.725 [2024-12-15 05:58:40.357832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.725 [2024-12-15 05:58:40.357843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:20.102 [2024-12-15 05:58:41.359565] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:20.102 [2024-12-15 05:58:41.359914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:20.102 [2024-12-15 05:58:41.359968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:20.102 [2024-12-15 05:58:41.359987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfe450 with addr=10.0.0.2, port=4420 00:18:20.102 [2024-12-15 05:58:41.360002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe450 is same with the state(5) to be set 00:18:20.102 [2024-12-15 05:58:41.360115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfe450 (9): Bad file descriptor 00:18:20.102 [2024-12-15 05:58:41.360303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:20.102 [2024-12-15 05:58:41.360332] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:20.102 [2024-12-15 05:58:41.360356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:20.102 [2024-12-15 05:58:41.362791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:20.102 [2024-12-15 05:58:41.362818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:20.102 05:58:41 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:20.102 [2024-12-15 05:58:41.626464] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:20.102 05:58:41 -- host/timeout.sh@103 -- # wait 85416
00:18:21.039 [2024-12-15 05:58:42.397373] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:26.373
00:18:26.373 Latency(us)
00:18:26.373 [2024-12-15T05:58:48.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:26.373 [2024-12-15T05:58:48.014Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:26.373 Verification LBA range: start 0x0 length 0x4000
00:18:26.373 NVMe0n1 : 10.01 8339.41 32.58 6116.06 0.00 8840.02 484.07 3019898.88
00:18:26.373 [2024-12-15T05:58:48.014Z] ===================================================================================================================
00:18:26.373 [2024-12-15T05:58:48.014Z] Total : 8339.41 32.58 6116.06 0.00 8840.02 0.00 3019898.88
00:18:26.373 0
00:18:26.373 05:58:47 -- host/timeout.sh@105 -- # killprocess 85286
00:18:26.373 05:58:47 -- common/autotest_common.sh@936 -- # '[' -z 85286 ']'
00:18:26.373 05:58:47 -- common/autotest_common.sh@940 -- # kill -0 85286
00:18:26.373 05:58:47 -- common/autotest_common.sh@941 -- # uname
00:18:26.373 05:58:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:26.373 05:58:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85286
00:18:26.373 killing process with pid 85286
Received shutdown signal, test time was about 10.000000 seconds
00:18:26.373
00:18:26.373 Latency(us)
00:18:26.373 [2024-12-15T05:58:48.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:26.373 [2024-12-15T05:58:48.014Z] ===================================================================================================================
00:18:26.373 [2024-12-15T05:58:48.014Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:26.373 05:58:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:26.373 05:58:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:26.373 05:58:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85286'
00:18:26.373 05:58:47 -- common/autotest_common.sh@955 -- # kill 85286
00:18:26.373 05:58:47 -- common/autotest_common.sh@960 -- # wait 85286
00:18:26.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
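Note: a quick sanity check on the summary line above, assuming the columns follow the printed header (runtime(s), IOPS, MiB/s, Fail/s, TO/s, Average, min, max) and the 4096-byte I/O size shown in the job line: 8339.41 IOPS * 4096 B / 1048576 comes out to about 32.58 MiB/s, which matches the reported throughput, and 8339.41 IOPS over the 10.01 s runtime is roughly 83,000 completed I/Os. The large Fail/s figure (6116.06) presumably reflects the I/Os aborted while the controller was being reset and reconnected during the run.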
00:18:26.373 05:58:47 -- host/timeout.sh@110 -- # bdevperf_pid=85530
00:18:26.373 05:58:47 -- host/timeout.sh@112 -- # waitforlisten 85530 /var/tmp/bdevperf.sock
00:18:26.373 05:58:47 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:18:26.373 05:58:47 -- common/autotest_common.sh@829 -- # '[' -z 85530 ']'
00:18:26.373 05:58:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:26.373 05:58:47 -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:26.373 05:58:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:26.373 05:58:47 -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:26.373 05:58:47 -- common/autotest_common.sh@10 -- # set +x
00:18:26.373 [2024-12-15 05:58:47.477146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:18:26.373 [2024-12-15 05:58:47.477511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85530 ]
00:18:26.373 [2024-12-15 05:58:47.615831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:26.373 [2024-12-15 05:58:47.648434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:26.940 05:58:48 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:26.940 05:58:48 -- common/autotest_common.sh@862 -- # return 0
00:18:26.940 05:58:48 -- host/timeout.sh@116 -- # dtrace_pid=85547
00:18:26.940 05:58:48 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85530 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:18:26.940 05:58:48 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:18:27.198 05:58:48 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:18:27.457 NVMe0n1
00:18:27.457 05:58:49 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:27.457 05:58:49 -- host/timeout.sh@124 -- # rpc_pid=85593
00:18:27.457 05:58:49 -- host/timeout.sh@125 -- # sleep 1
00:18:27.716 Running I/O for 10 seconds...
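Note: for readability, the bdevperf setup traced above condenses to roughly the following shell sequence. All paths, addresses, and flags are copied verbatim from the trace; the comments are interpretation, not part of host/timeout.sh, and this is a sketch of the flow rather than the exact script.

    # start bdevperf idle on core mask 0x4 (-z defers the workload until an RPC starts it):
    # queue depth 128, 4 KiB random reads, 10 s run, RPC socket at /var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

    # adjust NVMe bdev retry options over the bdevperf RPC socket (flags as traced: -r -1 -e 9)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9

    # attach the TCP target as NVMe0; retry the connection every 2 s and give up on the
    # controller after 5 s without a working connection
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # kick off the actual I/O run against the NVMe0n1 bdev created above
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests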
00:18:28.651 05:58:50 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.913 [2024-12-15 05:58:50.299399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 
05:58:50.299727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.299936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.299994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300219] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:28.913 [2024-12-15 05:58:50.300678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 
05:58:50.300880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.300955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.300978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.301005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.301013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.301023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.301031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.301050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.301059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.301069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.301077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.301087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.913 [2024-12-15 05:58:50.301095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.913 [2024-12-15 05:58:50.301105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93832 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:28.914 [2024-12-15 05:58:50.301768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.301964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 
05:58:50.301984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.301995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.914 [2024-12-15 05:58:50.302356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9719f0 is same with the state(5) to be set 00:18:28.914 [2024-12-15 05:58:50.302377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:28.914 [2024-12-15 05:58:50.302385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:28.914 [2024-12-15 05:58:50.302392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79696 len:8 PRP1 0x0 PRP2 0x0 00:18:28.914 [2024-12-15 05:58:50.302401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302441] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9719f0 was disconnected and freed. reset controller. 
00:18:28.914 [2024-12-15 05:58:50.302515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.914 [2024-12-15 05:58:50.302530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.914 [2024-12-15 05:58:50.302548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.914 [2024-12-15 05:58:50.302566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.914 [2024-12-15 05:58:50.302582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.914 [2024-12-15 05:58:50.302591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976470 is same with the state(5) to be set 00:18:28.914 [2024-12-15 05:58:50.302861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:28.914 [2024-12-15 05:58:50.302897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976470 (9): Bad file descriptor 00:18:28.914 [2024-12-15 05:58:50.303010] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.914 [2024-12-15 05:58:50.303074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.914 [2024-12-15 05:58:50.303115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.914 [2024-12-15 05:58:50.303142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976470 with addr=10.0.0.2, port=4420 00:18:28.914 [2024-12-15 05:58:50.303170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976470 is same with the state(5) to be set 00:18:28.914 [2024-12-15 05:58:50.303190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976470 (9): Bad file descriptor 00:18:28.914 [2024-12-15 05:58:50.303219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:28.914 [2024-12-15 05:58:50.303231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:28.914 [2024-12-15 05:58:50.303240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:28.914 [2024-12-15 05:58:50.303264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
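The run of "ABORTED - SQ DELETION (00/08)" completions above is the initiator draining everything still queued once the qpairs are torn down: first the reads on the deleted I/O submission queue, then the admin queue's ASYNC EVENT REQUESTs, each completed with that status instead of being executed. The "(00/08)" pair is the NVMe status code type and status code. A small illustrative decoder for the values seen here (not part of the test scripts, just a reading aid):

  # Illustrative only: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
  decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
      00/00) echo "generic: successful completion" ;;
      00/07) echo "generic: command abort requested" ;;
      00/08) echo "generic: command aborted due to SQ deletion" ;;
      *)     echo "sct=$sct sc=$sc: see the NVMe base specification status tables" ;;
    esac
  }
  decode_nvme_status 00 08   # -> generic: command aborted due to SQ deletion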
00:18:28.914 [2024-12-15 05:58:50.303276] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:28.914 05:58:50 -- host/timeout.sh@128 -- # wait 85593 00:18:30.818 [2024-12-15 05:58:52.303465] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:30.818 [2024-12-15 05:58:52.303826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:30.818 [2024-12-15 05:58:52.303914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:30.818 [2024-12-15 05:58:52.303950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976470 with addr=10.0.0.2, port=4420 00:18:30.818 [2024-12-15 05:58:52.303964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976470 is same with the state(5) to be set 00:18:30.818 [2024-12-15 05:58:52.303997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976470 (9): Bad file descriptor 00:18:30.818 [2024-12-15 05:58:52.304030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:30.818 [2024-12-15 05:58:52.304042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:30.818 [2024-12-15 05:58:52.304052] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:30.818 [2024-12-15 05:58:52.304092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:30.818 [2024-12-15 05:58:52.304103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:32.721 [2024-12-15 05:58:54.304258] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.722 [2024-12-15 05:58:54.304352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.722 [2024-12-15 05:58:54.304393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.722 [2024-12-15 05:58:54.304407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976470 with addr=10.0.0.2, port=4420 00:18:32.722 [2024-12-15 05:58:54.304420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x976470 is same with the state(5) to be set 00:18:32.722 [2024-12-15 05:58:54.304443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x976470 (9): Bad file descriptor 00:18:32.722 [2024-12-15 05:58:54.304460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:32.722 [2024-12-15 05:58:54.304469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:32.722 [2024-12-15 05:58:54.304479] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:32.722 [2024-12-15 05:58:54.304504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:32.722 [2024-12-15 05:58:54.304514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.253 [2024-12-15 05:58:56.304577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
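Each retry cycle above has the same shape: the uring/posix connect() to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) because the listener was removed, controller re-initialization is declared failed, and bdev_nvme schedules the next reset roughly two seconds later (05:58:50, :52, :54, :56). A minimal standalone probe with the same cadence, purely for illustration; the address, port and 2 s interval are taken from the log, and none of this loop is part of the SPDK scripts:

  addr=10.0.0.2 port=4420
  for attempt in 1 2 3 4; do
    # /dev/tcp connect attempt; a removed listener refuses it, like the errno 111 lines above
    if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
      echo "attempt $attempt: listener accepted the connection"
      break
    fi
    echo "attempt $attempt: connection refused, retrying in 2s"
    sleep 2
  done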
00:18:35.253 [2024-12-15 05:58:56.304632] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.253 [2024-12-15 05:58:56.304642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.253 [2024-12-15 05:58:56.304651] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:35.253 [2024-12-15 05:58:56.304679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.849 00:18:35.849 Latency(us) 00:18:35.849 [2024-12-15T05:58:57.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.849 [2024-12-15T05:58:57.490Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:35.849 NVMe0n1 : 8.12 2236.32 8.74 15.76 0.00 56788.62 7030.23 7015926.69 00:18:35.849 [2024-12-15T05:58:57.490Z] =================================================================================================================== 00:18:35.849 [2024-12-15T05:58:57.490Z] Total : 2236.32 8.74 15.76 0.00 56788.62 7030.23 7015926.69 00:18:35.849 0 00:18:35.849 05:58:57 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.849 Attaching 5 probes... 00:18:35.849 1365.868914: reset bdev controller NVMe0 00:18:35.849 1365.951508: reconnect bdev controller NVMe0 00:18:35.849 3366.336690: reconnect delay bdev controller NVMe0 00:18:35.849 3366.374214: reconnect bdev controller NVMe0 00:18:35.849 5367.142650: reconnect delay bdev controller NVMe0 00:18:35.849 5367.178949: reconnect bdev controller NVMe0 00:18:35.849 7367.553948: reconnect delay bdev controller NVMe0 00:18:35.849 7367.590023: reconnect bdev controller NVMe0 00:18:35.849 05:58:57 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:35.849 05:58:57 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:35.849 05:58:57 -- host/timeout.sh@136 -- # kill 85547 00:18:35.849 05:58:57 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.849 05:58:57 -- host/timeout.sh@139 -- # killprocess 85530 00:18:35.849 05:58:57 -- common/autotest_common.sh@936 -- # '[' -z 85530 ']' 00:18:35.849 05:58:57 -- common/autotest_common.sh@940 -- # kill -0 85530 00:18:35.849 05:58:57 -- common/autotest_common.sh@941 -- # uname 00:18:35.849 05:58:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.849 05:58:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85530 00:18:35.849 killing process with pid 85530 00:18:35.849 Received shutdown signal, test time was about 8.183921 seconds 00:18:35.849 00:18:35.849 Latency(us) 00:18:35.849 [2024-12-15T05:58:57.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.849 [2024-12-15T05:58:57.490Z] =================================================================================================================== 00:18:35.849 [2024-12-15T05:58:57.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.849 05:58:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:35.849 05:58:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:35.849 05:58:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85530' 00:18:35.849 05:58:57 -- common/autotest_common.sh@955 -- # kill 85530 00:18:35.849 05:58:57 -- common/autotest_common.sh@960 -- # wait 85530 00:18:36.108 05:58:57 
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.367 05:58:57 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:36.367 05:58:57 -- host/timeout.sh@145 -- # nvmftestfini 00:18:36.367 05:58:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:36.367 05:58:57 -- nvmf/common.sh@116 -- # sync 00:18:36.367 05:58:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:36.367 05:58:57 -- nvmf/common.sh@119 -- # set +e 00:18:36.367 05:58:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:36.367 05:58:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:36.367 rmmod nvme_tcp 00:18:36.367 rmmod nvme_fabrics 00:18:36.367 rmmod nvme_keyring 00:18:36.367 05:58:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:36.367 05:58:57 -- nvmf/common.sh@123 -- # set -e 00:18:36.367 05:58:57 -- nvmf/common.sh@124 -- # return 0 00:18:36.367 05:58:57 -- nvmf/common.sh@477 -- # '[' -n 85099 ']' 00:18:36.367 05:58:57 -- nvmf/common.sh@478 -- # killprocess 85099 00:18:36.367 05:58:57 -- common/autotest_common.sh@936 -- # '[' -z 85099 ']' 00:18:36.367 05:58:57 -- common/autotest_common.sh@940 -- # kill -0 85099 00:18:36.367 05:58:57 -- common/autotest_common.sh@941 -- # uname 00:18:36.367 05:58:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.367 05:58:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85099 00:18:36.367 killing process with pid 85099 00:18:36.367 05:58:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:36.367 05:58:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:36.367 05:58:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85099' 00:18:36.367 05:58:57 -- common/autotest_common.sh@955 -- # kill 85099 00:18:36.367 05:58:57 -- common/autotest_common.sh@960 -- # wait 85099 00:18:36.626 05:58:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:36.626 05:58:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:36.626 05:58:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:36.626 05:58:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.626 05:58:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:36.626 05:58:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.626 05:58:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.626 05:58:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.626 05:58:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:36.626 ************************************ 00:18:36.626 END TEST nvmf_timeout 00:18:36.626 ************************************ 00:18:36.626 00:18:36.626 real 0m46.277s 00:18:36.626 user 2m17.105s 00:18:36.626 sys 0m5.235s 00:18:36.626 05:58:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:36.626 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:18:36.626 05:58:58 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:36.626 05:58:58 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:36.626 05:58:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:36.626 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:18:36.626 05:58:58 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:18:36.626 ************************************ 00:18:36.626 END TEST nvmf_tcp 00:18:36.626 ************************************ 00:18:36.626 00:18:36.626 real 10m23.388s 00:18:36.626 user 29m8.197s 00:18:36.626 sys 3m22.097s 00:18:36.626 
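As a quick consistency check on the bdevperf summary printed above for NVMe0n1 (4096-byte random reads over 8.12 s of runtime): 2236.32 IOPS at 4 KiB per I/O works out to the ~8.74 MiB/s the table reports. The one-liner below only re-derives numbers already in the log:

  echo "scale=2; 2236.32 * 4096 / 1024 / 1024" | bc   # 8.73 MiB/s (the table rounds the same value to 8.74)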
05:58:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:36.626 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:18:36.626 05:58:58 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:18:36.626 05:58:58 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:36.626 05:58:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:36.626 05:58:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.626 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:18:36.884 ************************************ 00:18:36.884 START TEST nvmf_dif 00:18:36.884 ************************************ 00:18:36.884 05:58:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:36.884 * Looking for test storage... 00:18:36.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:36.884 05:58:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:36.884 05:58:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:36.884 05:58:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:36.884 05:58:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:36.884 05:58:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:36.884 05:58:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:36.884 05:58:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:36.884 05:58:58 -- scripts/common.sh@335 -- # IFS=.-: 00:18:36.884 05:58:58 -- scripts/common.sh@335 -- # read -ra ver1 00:18:36.884 05:58:58 -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.884 05:58:58 -- scripts/common.sh@336 -- # read -ra ver2 00:18:36.884 05:58:58 -- scripts/common.sh@337 -- # local 'op=<' 00:18:36.884 05:58:58 -- scripts/common.sh@339 -- # ver1_l=2 00:18:36.884 05:58:58 -- scripts/common.sh@340 -- # ver2_l=1 00:18:36.884 05:58:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:36.884 05:58:58 -- scripts/common.sh@343 -- # case "$op" in 00:18:36.884 05:58:58 -- scripts/common.sh@344 -- # : 1 00:18:36.884 05:58:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:36.884 05:58:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.884 05:58:58 -- scripts/common.sh@364 -- # decimal 1 00:18:36.884 05:58:58 -- scripts/common.sh@352 -- # local d=1 00:18:36.884 05:58:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.884 05:58:58 -- scripts/common.sh@354 -- # echo 1 00:18:36.884 05:58:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:36.884 05:58:58 -- scripts/common.sh@365 -- # decimal 2 00:18:36.884 05:58:58 -- scripts/common.sh@352 -- # local d=2 00:18:36.884 05:58:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.884 05:58:58 -- scripts/common.sh@354 -- # echo 2 00:18:36.884 05:58:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:36.884 05:58:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:36.884 05:58:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:36.884 05:58:58 -- scripts/common.sh@367 -- # return 0 00:18:36.884 05:58:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.884 05:58:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:36.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.884 --rc genhtml_branch_coverage=1 00:18:36.884 --rc genhtml_function_coverage=1 00:18:36.884 --rc genhtml_legend=1 00:18:36.884 --rc geninfo_all_blocks=1 00:18:36.884 --rc geninfo_unexecuted_blocks=1 00:18:36.884 00:18:36.884 ' 00:18:36.884 05:58:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:36.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.884 --rc genhtml_branch_coverage=1 00:18:36.884 --rc genhtml_function_coverage=1 00:18:36.884 --rc genhtml_legend=1 00:18:36.884 --rc geninfo_all_blocks=1 00:18:36.884 --rc geninfo_unexecuted_blocks=1 00:18:36.884 00:18:36.884 ' 00:18:36.884 05:58:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:36.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.884 --rc genhtml_branch_coverage=1 00:18:36.884 --rc genhtml_function_coverage=1 00:18:36.884 --rc genhtml_legend=1 00:18:36.884 --rc geninfo_all_blocks=1 00:18:36.884 --rc geninfo_unexecuted_blocks=1 00:18:36.884 00:18:36.884 ' 00:18:36.884 05:58:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:36.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.884 --rc genhtml_branch_coverage=1 00:18:36.884 --rc genhtml_function_coverage=1 00:18:36.884 --rc genhtml_legend=1 00:18:36.884 --rc geninfo_all_blocks=1 00:18:36.884 --rc geninfo_unexecuted_blocks=1 00:18:36.884 00:18:36.884 ' 00:18:36.884 05:58:58 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:36.884 05:58:58 -- nvmf/common.sh@7 -- # uname -s 00:18:36.884 05:58:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.884 05:58:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.884 05:58:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.884 05:58:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.884 05:58:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.884 05:58:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.884 05:58:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.884 05:58:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.884 05:58:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.884 05:58:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.884 05:58:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:18:36.884 
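The long cmp_versions trace above is a field-by-field numeric comparison: both version strings are split on ".-:" and walked left to right until one field differs, which is how lcov 1.15 is classified as older than 2 and the pre-2.x "--rc lcov_*_coverage" option names get selected. A condensed standalone equivalent, for illustration only; the real helper is cmp_versions in scripts/common.sh, not this function:

  version_lt() {
    local IFS=.-: a b i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # an earlier smaller field decides it
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
  }
  version_lt 1.15 2 && echo "1.15 < 2"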
05:58:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:18:36.884 05:58:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.884 05:58:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.884 05:58:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:36.884 05:58:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:36.884 05:58:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.884 05:58:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.884 05:58:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.884 05:58:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.884 05:58:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.884 05:58:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.884 05:58:58 -- paths/export.sh@5 -- # export PATH 00:18:36.884 05:58:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.884 05:58:58 -- nvmf/common.sh@46 -- # : 0 00:18:36.884 05:58:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:36.884 05:58:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:36.884 05:58:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:36.884 05:58:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.884 05:58:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.884 05:58:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:36.884 05:58:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:36.884 05:58:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:36.884 05:58:58 -- target/dif.sh@15 -- # NULL_META=16 00:18:36.884 05:58:58 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:36.884 05:58:58 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:36.885 05:58:58 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:36.885 05:58:58 -- target/dif.sh@135 -- # nvmftestinit 00:18:36.885 05:58:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:36.885 05:58:58 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.885 05:58:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:36.885 05:58:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:36.885 05:58:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:36.885 05:58:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.885 05:58:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:36.885 05:58:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.885 05:58:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:36.885 05:58:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:36.885 05:58:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:36.885 05:58:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:36.885 05:58:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:36.885 05:58:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:36.885 05:58:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.885 05:58:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.885 05:58:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:36.885 05:58:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:36.885 05:58:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:36.885 05:58:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:36.885 05:58:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:36.885 05:58:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.885 05:58:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:36.885 05:58:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:36.885 05:58:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:36.885 05:58:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:36.885 05:58:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:36.885 05:58:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:36.885 Cannot find device "nvmf_tgt_br" 00:18:36.885 05:58:58 -- nvmf/common.sh@154 -- # true 00:18:36.885 05:58:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:36.885 Cannot find device "nvmf_tgt_br2" 00:18:36.885 05:58:58 -- nvmf/common.sh@155 -- # true 00:18:36.885 05:58:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:36.885 05:58:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:37.143 Cannot find device "nvmf_tgt_br" 00:18:37.143 05:58:58 -- nvmf/common.sh@157 -- # true 00:18:37.143 05:58:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:37.143 Cannot find device "nvmf_tgt_br2" 00:18:37.143 05:58:58 -- nvmf/common.sh@158 -- # true 00:18:37.143 05:58:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:37.143 05:58:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:37.143 05:58:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.143 05:58:58 -- nvmf/common.sh@161 -- # true 00:18:37.144 05:58:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.144 05:58:58 -- nvmf/common.sh@162 -- # true 00:18:37.144 05:58:58 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:18:37.144 05:58:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:37.144 05:58:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:37.144 05:58:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:37.144 05:58:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:37.144 05:58:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:37.144 05:58:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:37.144 05:58:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:37.144 05:58:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:37.144 05:58:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:37.144 05:58:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:37.144 05:58:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:37.144 05:58:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:37.144 05:58:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:37.144 05:58:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:37.144 05:58:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:37.144 05:58:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:37.144 05:58:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:37.144 05:58:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:37.144 05:58:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:37.144 05:58:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:37.144 05:58:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:37.144 05:58:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:37.144 05:58:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:37.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:18:37.144 00:18:37.144 --- 10.0.0.2 ping statistics --- 00:18:37.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.144 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:37.144 05:58:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:37.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:37.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:37.144 00:18:37.144 --- 10.0.0.3 ping statistics --- 00:18:37.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.144 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:37.144 05:58:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:37.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:37.144 00:18:37.144 --- 10.0.0.1 ping statistics --- 00:18:37.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.144 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:37.144 05:58:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.144 05:58:58 -- nvmf/common.sh@421 -- # return 0 00:18:37.144 05:58:58 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:37.144 05:58:58 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:37.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:37.712 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:37.712 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:37.712 05:58:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.712 05:58:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:37.712 05:58:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:37.712 05:58:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.712 05:58:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:37.712 05:58:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:37.712 05:58:59 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:37.712 05:58:59 -- target/dif.sh@137 -- # nvmfappstart 00:18:37.712 05:58:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:37.712 05:58:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.712 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:18:37.712 05:58:59 -- nvmf/common.sh@469 -- # nvmfpid=86034 00:18:37.712 05:58:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:37.712 05:58:59 -- nvmf/common.sh@470 -- # waitforlisten 86034 00:18:37.712 05:58:59 -- common/autotest_common.sh@829 -- # '[' -z 86034 ']' 00:18:37.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.712 05:58:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.712 05:58:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.712 05:58:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.712 05:58:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.712 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:18:37.712 [2024-12-15 05:58:59.272660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:37.712 [2024-12-15 05:58:59.273079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.971 [2024-12-15 05:58:59.417275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.971 [2024-12-15 05:58:59.459293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:37.971 [2024-12-15 05:58:59.459524] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.971 [2024-12-15 05:58:59.459561] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:37.971 [2024-12-15 05:58:59.459575] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.971 [2024-12-15 05:58:59.459617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.908 05:59:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.908 05:59:00 -- common/autotest_common.sh@862 -- # return 0 00:18:38.908 05:59:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:38.908 05:59:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:38.908 05:59:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 05:59:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.908 05:59:00 -- target/dif.sh@139 -- # create_transport 00:18:38.908 05:59:00 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:38.908 05:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.908 05:59:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 [2024-12-15 05:59:00.333812] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.908 05:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.908 05:59:00 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:38.908 05:59:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:38.908 05:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:38.908 05:59:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 ************************************ 00:18:38.908 START TEST fio_dif_1_default 00:18:38.908 ************************************ 00:18:38.908 05:59:00 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:18:38.908 05:59:00 -- target/dif.sh@86 -- # create_subsystems 0 00:18:38.908 05:59:00 -- target/dif.sh@28 -- # local sub 00:18:38.908 05:59:00 -- target/dif.sh@30 -- # for sub in "$@" 00:18:38.908 05:59:00 -- target/dif.sh@31 -- # create_subsystem 0 00:18:38.908 05:59:00 -- target/dif.sh@18 -- # local sub_id=0 00:18:38.908 05:59:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:38.908 05:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.908 05:59:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 bdev_null0 00:18:38.908 05:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.908 05:59:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:38.908 05:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.908 05:59:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 05:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.908 05:59:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:38.908 05:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.908 05:59:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 05:59:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.908 05:59:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:38.908 05:59:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.908 05:59:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 [2024-12-15 05:59:00.385933] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.908 05:59:00 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.908 05:59:00 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:38.908 05:59:00 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:38.908 05:59:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:38.908 05:59:00 -- nvmf/common.sh@520 -- # config=() 00:18:38.908 05:59:00 -- nvmf/common.sh@520 -- # local subsystem config 00:18:38.908 05:59:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:38.908 05:59:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:38.908 { 00:18:38.908 "params": { 00:18:38.908 "name": "Nvme$subsystem", 00:18:38.908 "trtype": "$TEST_TRANSPORT", 00:18:38.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.908 "adrfam": "ipv4", 00:18:38.908 "trsvcid": "$NVMF_PORT", 00:18:38.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.908 "hdgst": ${hdgst:-false}, 00:18:38.908 "ddgst": ${ddgst:-false} 00:18:38.908 }, 00:18:38.908 "method": "bdev_nvme_attach_controller" 00:18:38.908 } 00:18:38.908 EOF 00:18:38.908 )") 00:18:38.908 05:59:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.908 05:59:00 -- target/dif.sh@82 -- # gen_fio_conf 00:18:38.908 05:59:00 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.908 05:59:00 -- target/dif.sh@54 -- # local file 00:18:38.908 05:59:00 -- target/dif.sh@56 -- # cat 00:18:38.908 05:59:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:38.908 05:59:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.908 05:59:00 -- nvmf/common.sh@542 -- # cat 00:18:38.908 05:59:00 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:38.908 05:59:00 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.908 05:59:00 -- common/autotest_common.sh@1330 -- # shift 00:18:38.908 05:59:00 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:38.908 05:59:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.908 05:59:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:38.908 05:59:00 -- target/dif.sh@72 -- # (( file <= files )) 00:18:38.908 05:59:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.908 05:59:00 -- nvmf/common.sh@544 -- # jq . 
00:18:38.909 05:59:00 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:38.909 05:59:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:38.909 05:59:00 -- nvmf/common.sh@545 -- # IFS=, 00:18:38.909 05:59:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:38.909 "params": { 00:18:38.909 "name": "Nvme0", 00:18:38.909 "trtype": "tcp", 00:18:38.909 "traddr": "10.0.0.2", 00:18:38.909 "adrfam": "ipv4", 00:18:38.909 "trsvcid": "4420", 00:18:38.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:38.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:38.909 "hdgst": false, 00:18:38.909 "ddgst": false 00:18:38.909 }, 00:18:38.909 "method": "bdev_nvme_attach_controller" 00:18:38.909 }' 00:18:38.909 05:59:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:38.909 05:59:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:38.909 05:59:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.909 05:59:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.909 05:59:00 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:38.909 05:59:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:38.909 05:59:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:38.909 05:59:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:38.909 05:59:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:38.909 05:59:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:39.168 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:39.168 fio-3.35 00:18:39.168 Starting 1 thread 00:18:39.427 [2024-12-15 05:59:00.906717] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:39.427 [2024-12-15 05:59:00.907346] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:49.407 00:18:49.407 filename0: (groupid=0, jobs=1): err= 0: pid=86103: Sun Dec 15 05:59:11 2024 00:18:49.407 read: IOPS=9561, BW=37.4MiB/s (39.2MB/s)(374MiB/10001msec) 00:18:49.407 slat (usec): min=6, max=186, avg= 7.98, stdev= 3.60 00:18:49.407 clat (usec): min=317, max=4987, avg=394.83, stdev=53.93 00:18:49.407 lat (usec): min=324, max=5018, avg=402.81, stdev=54.61 00:18:49.407 clat percentiles (usec): 00:18:49.407 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:18:49.407 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 400], 00:18:49.407 | 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 457], 95.00th=[ 478], 00:18:49.407 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 586], 00:18:49.407 | 99.99th=[ 1713] 00:18:49.407 bw ( KiB/s): min=36704, max=39328, per=100.00%, avg=38291.47, stdev=723.38, samples=19 00:18:49.407 iops : min= 9176, max= 9832, avg=9572.84, stdev=180.84, samples=19 00:18:49.407 lat (usec) : 500=97.99%, 750=2.00% 00:18:49.407 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:18:49.407 cpu : usr=85.41%, sys=12.82%, ctx=23, majf=0, minf=8 00:18:49.407 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.407 issued rwts: total=95628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.407 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:49.407 00:18:49.407 Run status group 0 (all jobs): 00:18:49.407 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=374MiB (392MB), run=10001-10001msec 00:18:49.666 05:59:11 -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:49.666 05:59:11 -- target/dif.sh@43 -- # local sub 00:18:49.666 05:59:11 -- target/dif.sh@45 -- # for sub in "$@" 00:18:49.666 05:59:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:49.666 05:59:11 -- target/dif.sh@36 -- # local sub_id=0 00:18:49.666 05:59:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:49.666 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.666 05:59:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:49.666 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 ************************************ 00:18:49.666 END TEST fio_dif_1_default 00:18:49.666 ************************************ 00:18:49.666 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.666 00:18:49.666 real 0m10.848s 00:18:49.666 user 0m9.062s 00:18:49.666 sys 0m1.500s 00:18:49.666 05:59:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 05:59:11 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:49.666 05:59:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:49.666 05:59:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 ************************************ 00:18:49.666 START TEST 
fio_dif_1_multi_subsystems 00:18:49.666 ************************************ 00:18:49.666 05:59:11 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:18:49.666 05:59:11 -- target/dif.sh@92 -- # local files=1 00:18:49.666 05:59:11 -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:49.666 05:59:11 -- target/dif.sh@28 -- # local sub 00:18:49.666 05:59:11 -- target/dif.sh@30 -- # for sub in "$@" 00:18:49.666 05:59:11 -- target/dif.sh@31 -- # create_subsystem 0 00:18:49.666 05:59:11 -- target/dif.sh@18 -- # local sub_id=0 00:18:49.666 05:59:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:49.666 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 bdev_null0 00:18:49.666 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.666 05:59:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:49.666 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.666 05:59:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:49.666 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.666 05:59:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:49.666 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 [2024-12-15 05:59:11.285317] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.666 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.666 05:59:11 -- target/dif.sh@30 -- # for sub in "$@" 00:18:49.666 05:59:11 -- target/dif.sh@31 -- # create_subsystem 1 00:18:49.666 05:59:11 -- target/dif.sh@18 -- # local sub_id=1 00:18:49.666 05:59:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:49.666 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.666 bdev_null1 00:18:49.666 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.666 05:59:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:49.666 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.666 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.937 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.937 05:59:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:49.937 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.937 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.937 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.937 05:59:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.937 05:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.937 05:59:11 -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.937 05:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.937 05:59:11 -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:49.937 05:59:11 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:49.937 05:59:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:49.937 05:59:11 -- nvmf/common.sh@520 -- # config=() 00:18:49.937 05:59:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:49.937 05:59:11 -- nvmf/common.sh@520 -- # local subsystem config 00:18:49.938 05:59:11 -- target/dif.sh@82 -- # gen_fio_conf 00:18:49.938 05:59:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:49.938 05:59:11 -- target/dif.sh@54 -- # local file 00:18:49.938 05:59:11 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:49.938 05:59:11 -- target/dif.sh@56 -- # cat 00:18:49.938 05:59:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:49.938 { 00:18:49.938 "params": { 00:18:49.938 "name": "Nvme$subsystem", 00:18:49.938 "trtype": "$TEST_TRANSPORT", 00:18:49.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.938 "adrfam": "ipv4", 00:18:49.938 "trsvcid": "$NVMF_PORT", 00:18:49.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.938 "hdgst": ${hdgst:-false}, 00:18:49.938 "ddgst": ${ddgst:-false} 00:18:49.938 }, 00:18:49.938 "method": "bdev_nvme_attach_controller" 00:18:49.938 } 00:18:49.938 EOF 00:18:49.938 )") 00:18:49.938 05:59:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:49.938 05:59:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:49.938 05:59:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:49.938 05:59:11 -- nvmf/common.sh@542 -- # cat 00:18:49.938 05:59:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.938 05:59:11 -- common/autotest_common.sh@1330 -- # shift 00:18:49.938 05:59:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:49.938 05:59:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:49.938 05:59:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:49.938 05:59:11 -- target/dif.sh@72 -- # (( file <= files )) 00:18:49.938 05:59:11 -- target/dif.sh@73 -- # cat 00:18:49.938 05:59:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.938 05:59:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:49.938 05:59:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:49.938 05:59:11 -- target/dif.sh@72 -- # (( file++ )) 00:18:49.938 05:59:11 -- target/dif.sh@72 -- # (( file <= files )) 00:18:49.938 05:59:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:49.938 05:59:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:49.938 { 00:18:49.938 "params": { 00:18:49.938 "name": "Nvme$subsystem", 00:18:49.938 "trtype": "$TEST_TRANSPORT", 00:18:49.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.938 "adrfam": "ipv4", 00:18:49.938 "trsvcid": "$NVMF_PORT", 00:18:49.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.938 "hdgst": ${hdgst:-false}, 00:18:49.938 "ddgst": ${ddgst:-false} 00:18:49.938 }, 00:18:49.938 "method": "bdev_nvme_attach_controller" 00:18:49.938 } 
00:18:49.938 EOF 00:18:49.938 )") 00:18:49.938 05:59:11 -- nvmf/common.sh@542 -- # cat 00:18:49.938 05:59:11 -- nvmf/common.sh@544 -- # jq . 00:18:49.938 05:59:11 -- nvmf/common.sh@545 -- # IFS=, 00:18:49.938 05:59:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:49.938 "params": { 00:18:49.938 "name": "Nvme0", 00:18:49.938 "trtype": "tcp", 00:18:49.938 "traddr": "10.0.0.2", 00:18:49.938 "adrfam": "ipv4", 00:18:49.938 "trsvcid": "4420", 00:18:49.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:49.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:49.938 "hdgst": false, 00:18:49.938 "ddgst": false 00:18:49.938 }, 00:18:49.938 "method": "bdev_nvme_attach_controller" 00:18:49.938 },{ 00:18:49.938 "params": { 00:18:49.938 "name": "Nvme1", 00:18:49.938 "trtype": "tcp", 00:18:49.938 "traddr": "10.0.0.2", 00:18:49.938 "adrfam": "ipv4", 00:18:49.938 "trsvcid": "4420", 00:18:49.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.938 "hdgst": false, 00:18:49.938 "ddgst": false 00:18:49.938 }, 00:18:49.938 "method": "bdev_nvme_attach_controller" 00:18:49.938 }' 00:18:49.938 05:59:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:49.938 05:59:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:49.938 05:59:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:49.938 05:59:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.938 05:59:11 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:49.938 05:59:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:49.938 05:59:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:49.938 05:59:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:49.938 05:59:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:49.938 05:59:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:49.938 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:49.938 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:49.938 fio-3.35 00:18:49.938 Starting 2 threads 00:18:50.517 [2024-12-15 05:59:11.922910] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:50.517 [2024-12-15 05:59:11.922988] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:00.490 00:19:00.490 filename0: (groupid=0, jobs=1): err= 0: pid=86263: Sun Dec 15 05:59:22 2024 00:19:00.490 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(198MiB/10001msec) 00:19:00.490 slat (nsec): min=6375, max=95868, avg=13248.44, stdev=5097.01 00:19:00.490 clat (usec): min=566, max=1302, avg=754.84, stdev=61.00 00:19:00.490 lat (usec): min=573, max=1329, avg=768.09, stdev=61.78 00:19:00.490 clat percentiles (usec): 00:19:00.490 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 701], 00:19:00.490 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:19:00.490 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 865], 00:19:00.490 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 971], 00:19:00.490 | 99.99th=[ 1020] 00:19:00.490 bw ( KiB/s): min=19808, max=20854, per=50.03%, avg=20272.32, stdev=273.98, samples=19 00:19:00.490 iops : min= 4952, max= 5213, avg=5068.05, stdev=68.44, samples=19 00:19:00.490 lat (usec) : 750=50.45%, 1000=49.53% 00:19:00.490 lat (msec) : 2=0.02% 00:19:00.490 cpu : usr=90.66%, sys=8.03%, ctx=21, majf=0, minf=0 00:19:00.490 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.490 issued rwts: total=50648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.490 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:00.490 filename1: (groupid=0, jobs=1): err= 0: pid=86264: Sun Dec 15 05:59:22 2024 00:19:00.490 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(198MiB/10001msec) 00:19:00.490 slat (nsec): min=6332, max=58942, avg=13792.00, stdev=5387.40 00:19:00.490 clat (usec): min=409, max=1023, avg=751.03, stdev=58.03 00:19:00.490 lat (usec): min=415, max=1064, avg=764.82, stdev=59.08 00:19:00.490 clat percentiles (usec): 00:19:00.490 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 701], 00:19:00.490 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 742], 60.00th=[ 758], 00:19:00.490 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:19:00.490 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 955], 99.95th=[ 963], 00:19:00.490 | 99.99th=[ 988] 00:19:00.490 bw ( KiB/s): min=19847, max=20854, per=50.04%, avg=20274.37, stdev=270.43, samples=19 00:19:00.490 iops : min= 4961, max= 5213, avg=5068.53, stdev=67.62, samples=19 00:19:00.490 lat (usec) : 500=0.01%, 750=53.51%, 1000=46.47% 00:19:00.490 lat (msec) : 2=0.01% 00:19:00.490 cpu : usr=91.03%, sys=7.61%, ctx=30, majf=0, minf=0 00:19:00.490 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.490 issued rwts: total=50652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.490 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:00.490 00:19:00.490 Run status group 0 (all jobs): 00:19:00.490 READ: bw=39.6MiB/s (41.5MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.7MB/s), io=396MiB (415MB), run=10001-10001msec 00:19:00.750 05:59:22 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:00.750 05:59:22 -- target/dif.sh@43 -- # local sub 00:19:00.750 05:59:22 -- target/dif.sh@45 -- # for sub in "$@" 00:19:00.750 05:59:22 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:19:00.750 05:59:22 -- target/dif.sh@36 -- # local sub_id=0 00:19:00.750 05:59:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:00.750 05:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 05:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.750 05:59:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:00.750 05:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 05:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.750 05:59:22 -- target/dif.sh@45 -- # for sub in "$@" 00:19:00.750 05:59:22 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:00.750 05:59:22 -- target/dif.sh@36 -- # local sub_id=1 00:19:00.750 05:59:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.750 05:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 05:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.750 05:59:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:00.750 05:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 ************************************ 00:19:00.750 END TEST fio_dif_1_multi_subsystems 00:19:00.750 ************************************ 00:19:00.750 05:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.750 00:19:00.750 real 0m10.969s 00:19:00.750 user 0m18.838s 00:19:00.750 sys 0m1.808s 00:19:00.750 05:59:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 05:59:22 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:00.750 05:59:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:00.750 05:59:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 ************************************ 00:19:00.750 START TEST fio_dif_rand_params 00:19:00.750 ************************************ 00:19:00.750 05:59:22 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:19:00.750 05:59:22 -- target/dif.sh@100 -- # local NULL_DIF 00:19:00.750 05:59:22 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:00.750 05:59:22 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:00.750 05:59:22 -- target/dif.sh@103 -- # bs=128k 00:19:00.750 05:59:22 -- target/dif.sh@103 -- # numjobs=3 00:19:00.750 05:59:22 -- target/dif.sh@103 -- # iodepth=3 00:19:00.750 05:59:22 -- target/dif.sh@103 -- # runtime=5 00:19:00.750 05:59:22 -- target/dif.sh@105 -- # create_subsystems 0 00:19:00.750 05:59:22 -- target/dif.sh@28 -- # local sub 00:19:00.750 05:59:22 -- target/dif.sh@30 -- # for sub in "$@" 00:19:00.750 05:59:22 -- target/dif.sh@31 -- # create_subsystem 0 00:19:00.750 05:59:22 -- target/dif.sh@18 -- # local sub_id=0 00:19:00.750 05:59:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:00.750 05:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 bdev_null0 00:19:00.750 05:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.750 
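The null bdev created just above for this pass (DIF type 3) can be reproduced by hand; rpc_cmd in the trace wraps SPDK's scripts/rpc.py, so the equivalent manual call is roughly the sketch below, assuming the target's default /var/tmp/spdk.sock RPC socket:

# 64 MB null bdev, 512-byte blocks, 16 bytes of per-block metadata carrying
# DIF type 3 protection information (arguments copied from the trace).
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# The call prints the new bdev name, bdev_null0, just as rpc_cmd did above.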
05:59:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:00.750 05:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 05:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.750 05:59:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:00.750 05:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 05:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.750 05:59:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:00.750 05:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.750 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:19:00.750 [2024-12-15 05:59:22.309740] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.750 05:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.750 05:59:22 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:00.750 05:59:22 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:00.750 05:59:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:00.750 05:59:22 -- nvmf/common.sh@520 -- # config=() 00:19:00.750 05:59:22 -- nvmf/common.sh@520 -- # local subsystem config 00:19:00.750 05:59:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:00.750 05:59:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:00.750 05:59:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:00.750 { 00:19:00.750 "params": { 00:19:00.750 "name": "Nvme$subsystem", 00:19:00.750 "trtype": "$TEST_TRANSPORT", 00:19:00.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:00.750 "adrfam": "ipv4", 00:19:00.750 "trsvcid": "$NVMF_PORT", 00:19:00.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:00.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:00.750 "hdgst": ${hdgst:-false}, 00:19:00.750 "ddgst": ${ddgst:-false} 00:19:00.750 }, 00:19:00.750 "method": "bdev_nvme_attach_controller" 00:19:00.750 } 00:19:00.750 EOF 00:19:00.750 )") 00:19:00.750 05:59:22 -- target/dif.sh@82 -- # gen_fio_conf 00:19:00.750 05:59:22 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:00.750 05:59:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:00.750 05:59:22 -- target/dif.sh@54 -- # local file 00:19:00.750 05:59:22 -- target/dif.sh@56 -- # cat 00:19:00.750 05:59:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:00.750 05:59:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:00.750 05:59:22 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.750 05:59:22 -- common/autotest_common.sh@1330 -- # shift 00:19:00.750 05:59:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:00.750 05:59:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.750 05:59:22 -- nvmf/common.sh@542 -- # cat 00:19:00.750 05:59:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.750 05:59:22 -- common/autotest_common.sh@1334 -- # grep libasan 
00:19:00.750 05:59:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:00.750 05:59:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:00.750 05:59:22 -- nvmf/common.sh@544 -- # jq . 00:19:00.750 05:59:22 -- target/dif.sh@72 -- # (( file <= files )) 00:19:00.750 05:59:22 -- nvmf/common.sh@545 -- # IFS=, 00:19:00.750 05:59:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:00.750 "params": { 00:19:00.750 "name": "Nvme0", 00:19:00.750 "trtype": "tcp", 00:19:00.750 "traddr": "10.0.0.2", 00:19:00.750 "adrfam": "ipv4", 00:19:00.750 "trsvcid": "4420", 00:19:00.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:00.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:00.750 "hdgst": false, 00:19:00.750 "ddgst": false 00:19:00.750 }, 00:19:00.750 "method": "bdev_nvme_attach_controller" 00:19:00.750 }' 00:19:00.751 05:59:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:00.751 05:59:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:00.751 05:59:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.751 05:59:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.751 05:59:22 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:00.751 05:59:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:00.751 05:59:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:00.751 05:59:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:00.751 05:59:22 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:00.751 05:59:22 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.010 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:01.010 ... 00:19:01.010 fio-3.35 00:19:01.010 Starting 3 threads 00:19:01.268 [2024-12-15 05:59:22.844955] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
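Before the fio run above was launched, the trace shows the target-side export of bdev_null0 over NVMe/TCP. Expressed as manual rpc.py calls (same default-socket assumption as before), that sequence is roughly:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# After the listener is added the target logs "NVMe/TCP Target Listening on
# 10.0.0.2 port 4420", and the generated fio JSON config attaches Nvme0 to
# exactly this address, port, and subsystem NQN.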
00:19:01.269 [2024-12-15 05:59:22.845298] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:06.539 00:19:06.539 filename0: (groupid=0, jobs=1): err= 0: pid=86420: Sun Dec 15 05:59:27 2024 00:19:06.539 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5007msec) 00:19:06.539 slat (nsec): min=6853, max=54084, avg=14295.92, stdev=5638.95 00:19:06.539 clat (usec): min=10073, max=17479, avg=10948.57, stdev=543.19 00:19:06.539 lat (usec): min=10087, max=17501, avg=10962.87, stdev=543.34 00:19:06.539 clat percentiles (usec): 00:19:06.539 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 20.00th=[10552], 00:19:06.539 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:19:06.540 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11731], 95.00th=[11863], 00:19:06.540 | 99.00th=[12125], 99.50th=[12125], 99.90th=[17433], 99.95th=[17433], 00:19:06.540 | 99.99th=[17433] 00:19:06.540 bw ( KiB/s): min=34491, max=36096, per=33.30%, avg=34937.10, stdev=657.53, samples=10 00:19:06.540 iops : min= 269, max= 282, avg=272.90, stdev= 5.17, samples=10 00:19:06.540 lat (msec) : 20=100.00% 00:19:06.540 cpu : usr=92.49%, sys=7.03%, ctx=4, majf=0, minf=8 00:19:06.540 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.540 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.540 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:06.540 filename0: (groupid=0, jobs=1): err= 0: pid=86421: Sun Dec 15 05:59:27 2024 00:19:06.540 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5003msec) 00:19:06.540 slat (nsec): min=6826, max=52853, avg=15854.09, stdev=5411.03 00:19:06.540 clat (usec): min=10083, max=13571, avg=10934.87, stdev=464.35 00:19:06.540 lat (usec): min=10096, max=13596, avg=10950.72, stdev=465.07 00:19:06.540 clat percentiles (usec): 00:19:06.540 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 20.00th=[10552], 00:19:06.540 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:19:06.540 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11863], 00:19:06.540 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13566], 99.95th=[13566], 00:19:06.540 | 99.99th=[13566] 00:19:06.540 bw ( KiB/s): min=34560, max=36096, per=33.35%, avg=34986.67, stdev=557.94, samples=9 00:19:06.540 iops : min= 270, max= 282, avg=273.33, stdev= 4.36, samples=9 00:19:06.540 lat (msec) : 20=100.00% 00:19:06.540 cpu : usr=91.12%, sys=8.30%, ctx=15, majf=0, minf=9 00:19:06.540 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.540 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.540 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:06.540 filename0: (groupid=0, jobs=1): err= 0: pid=86422: Sun Dec 15 05:59:27 2024 00:19:06.540 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5007msec) 00:19:06.540 slat (nsec): min=5631, max=54903, avg=15836.15, stdev=5464.82 00:19:06.540 clat (usec): min=10077, max=17154, avg=10942.63, stdev=534.47 00:19:06.540 lat (usec): min=10090, max=17172, avg=10958.47, stdev=534.96 00:19:06.540 clat percentiles (usec): 00:19:06.540 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 
20.00th=[10552], 00:19:06.540 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:19:06.540 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11863], 00:19:06.540 | 99.00th=[11994], 99.50th=[12125], 99.90th=[17171], 99.95th=[17171], 00:19:06.540 | 99.99th=[17171] 00:19:06.540 bw ( KiB/s): min=34560, max=36096, per=33.31%, avg=34944.00, stdev=652.67, samples=10 00:19:06.540 iops : min= 270, max= 282, avg=273.00, stdev= 5.10, samples=10 00:19:06.540 lat (msec) : 20=100.00% 00:19:06.540 cpu : usr=92.13%, sys=6.97%, ctx=39, majf=0, minf=9 00:19:06.540 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.540 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.540 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:06.540 00:19:06.540 Run status group 0 (all jobs): 00:19:06.540 READ: bw=102MiB/s (107MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=513MiB (538MB), run=5003-5007msec 00:19:06.540 05:59:28 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:06.540 05:59:28 -- target/dif.sh@43 -- # local sub 00:19:06.540 05:59:28 -- target/dif.sh@45 -- # for sub in "$@" 00:19:06.540 05:59:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:06.540 05:59:28 -- target/dif.sh@36 -- # local sub_id=0 00:19:06.540 05:59:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:06.540 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.540 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.540 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.540 05:59:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:06.540 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.540 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.540 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.540 05:59:28 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:06.540 05:59:28 -- target/dif.sh@109 -- # bs=4k 00:19:06.540 05:59:28 -- target/dif.sh@109 -- # numjobs=8 00:19:06.540 05:59:28 -- target/dif.sh@109 -- # iodepth=16 00:19:06.540 05:59:28 -- target/dif.sh@109 -- # runtime= 00:19:06.540 05:59:28 -- target/dif.sh@109 -- # files=2 00:19:06.540 05:59:28 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:06.540 05:59:28 -- target/dif.sh@28 -- # local sub 00:19:06.540 05:59:28 -- target/dif.sh@30 -- # for sub in "$@" 00:19:06.540 05:59:28 -- target/dif.sh@31 -- # create_subsystem 0 00:19:06.540 05:59:28 -- target/dif.sh@18 -- # local sub_id=0 00:19:06.540 05:59:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:06.540 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.540 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.540 bdev_null0 00:19:06.540 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.540 05:59:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:06.540 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.540 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.540 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.540 05:59:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:06.540 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.540 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.540 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.540 05:59:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:06.540 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.540 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.540 [2024-12-15 05:59:28.174321] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.799 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.799 05:59:28 -- target/dif.sh@30 -- # for sub in "$@" 00:19:06.799 05:59:28 -- target/dif.sh@31 -- # create_subsystem 1 00:19:06.799 05:59:28 -- target/dif.sh@18 -- # local sub_id=1 00:19:06.799 05:59:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:06.799 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.799 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.799 bdev_null1 00:19:06.799 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.799 05:59:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:06.799 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.800 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.800 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.800 05:59:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:06.800 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.800 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.800 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.800 05:59:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.800 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.800 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.800 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.800 05:59:28 -- target/dif.sh@30 -- # for sub in "$@" 00:19:06.800 05:59:28 -- target/dif.sh@31 -- # create_subsystem 2 00:19:06.800 05:59:28 -- target/dif.sh@18 -- # local sub_id=2 00:19:06.800 05:59:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:06.800 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.800 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.800 bdev_null2 00:19:06.800 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.800 05:59:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:06.800 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.800 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.800 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.800 05:59:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:06.800 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.800 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.800 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:06.800 05:59:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:06.800 05:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.800 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:19:06.800 05:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.800 05:59:28 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:06.800 05:59:28 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:06.800 05:59:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:06.800 05:59:28 -- nvmf/common.sh@520 -- # config=() 00:19:06.800 05:59:28 -- nvmf/common.sh@520 -- # local subsystem config 00:19:06.800 05:59:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:06.800 05:59:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:06.800 05:59:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:06.800 { 00:19:06.800 "params": { 00:19:06.800 "name": "Nvme$subsystem", 00:19:06.800 "trtype": "$TEST_TRANSPORT", 00:19:06.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:06.800 "adrfam": "ipv4", 00:19:06.800 "trsvcid": "$NVMF_PORT", 00:19:06.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:06.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:06.800 "hdgst": ${hdgst:-false}, 00:19:06.800 "ddgst": ${ddgst:-false} 00:19:06.800 }, 00:19:06.800 "method": "bdev_nvme_attach_controller" 00:19:06.800 } 00:19:06.800 EOF 00:19:06.800 )") 00:19:06.800 05:59:28 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:06.800 05:59:28 -- target/dif.sh@82 -- # gen_fio_conf 00:19:06.800 05:59:28 -- target/dif.sh@54 -- # local file 00:19:06.800 05:59:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:06.800 05:59:28 -- target/dif.sh@56 -- # cat 00:19:06.800 05:59:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:06.800 05:59:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:06.800 05:59:28 -- nvmf/common.sh@542 -- # cat 00:19:06.800 05:59:28 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.800 05:59:28 -- common/autotest_common.sh@1330 -- # shift 00:19:06.800 05:59:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:06.800 05:59:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.800 05:59:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:06.800 05:59:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:06.800 05:59:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.800 05:59:28 -- target/dif.sh@72 -- # (( file <= files )) 00:19:06.800 05:59:28 -- target/dif.sh@73 -- # cat 00:19:06.800 05:59:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:06.800 { 00:19:06.800 "params": { 00:19:06.800 "name": "Nvme$subsystem", 00:19:06.800 "trtype": "$TEST_TRANSPORT", 00:19:06.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:06.800 "adrfam": "ipv4", 00:19:06.800 "trsvcid": "$NVMF_PORT", 00:19:06.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:06.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:06.800 "hdgst": ${hdgst:-false}, 00:19:06.800 "ddgst": ${ddgst:-false} 00:19:06.800 }, 00:19:06.800 "method": "bdev_nvme_attach_controller" 00:19:06.800 } 00:19:06.800 EOF 00:19:06.800 
)") 00:19:06.800 05:59:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:06.800 05:59:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:06.800 05:59:28 -- nvmf/common.sh@542 -- # cat 00:19:06.800 05:59:28 -- target/dif.sh@72 -- # (( file++ )) 00:19:06.800 05:59:28 -- target/dif.sh@72 -- # (( file <= files )) 00:19:06.800 05:59:28 -- target/dif.sh@73 -- # cat 00:19:06.800 05:59:28 -- target/dif.sh@72 -- # (( file++ )) 00:19:06.800 05:59:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:06.800 05:59:28 -- target/dif.sh@72 -- # (( file <= files )) 00:19:06.800 05:59:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:06.800 { 00:19:06.800 "params": { 00:19:06.800 "name": "Nvme$subsystem", 00:19:06.800 "trtype": "$TEST_TRANSPORT", 00:19:06.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:06.800 "adrfam": "ipv4", 00:19:06.800 "trsvcid": "$NVMF_PORT", 00:19:06.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:06.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:06.800 "hdgst": ${hdgst:-false}, 00:19:06.800 "ddgst": ${ddgst:-false} 00:19:06.800 }, 00:19:06.800 "method": "bdev_nvme_attach_controller" 00:19:06.800 } 00:19:06.800 EOF 00:19:06.800 )") 00:19:06.800 05:59:28 -- nvmf/common.sh@542 -- # cat 00:19:06.800 05:59:28 -- nvmf/common.sh@544 -- # jq . 00:19:06.800 05:59:28 -- nvmf/common.sh@545 -- # IFS=, 00:19:06.800 05:59:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:06.800 "params": { 00:19:06.800 "name": "Nvme0", 00:19:06.800 "trtype": "tcp", 00:19:06.800 "traddr": "10.0.0.2", 00:19:06.800 "adrfam": "ipv4", 00:19:06.800 "trsvcid": "4420", 00:19:06.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:06.800 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:06.800 "hdgst": false, 00:19:06.800 "ddgst": false 00:19:06.800 }, 00:19:06.800 "method": "bdev_nvme_attach_controller" 00:19:06.800 },{ 00:19:06.800 "params": { 00:19:06.800 "name": "Nvme1", 00:19:06.800 "trtype": "tcp", 00:19:06.800 "traddr": "10.0.0.2", 00:19:06.800 "adrfam": "ipv4", 00:19:06.800 "trsvcid": "4420", 00:19:06.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.800 "hdgst": false, 00:19:06.800 "ddgst": false 00:19:06.800 }, 00:19:06.800 "method": "bdev_nvme_attach_controller" 00:19:06.800 },{ 00:19:06.800 "params": { 00:19:06.800 "name": "Nvme2", 00:19:06.800 "trtype": "tcp", 00:19:06.800 "traddr": "10.0.0.2", 00:19:06.800 "adrfam": "ipv4", 00:19:06.800 "trsvcid": "4420", 00:19:06.800 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:06.800 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:06.800 "hdgst": false, 00:19:06.800 "ddgst": false 00:19:06.800 }, 00:19:06.800 "method": "bdev_nvme_attach_controller" 00:19:06.800 }' 00:19:06.800 05:59:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:06.800 05:59:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:06.800 05:59:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.800 05:59:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.800 05:59:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:06.800 05:59:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:06.800 05:59:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:06.800 05:59:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:06.800 05:59:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:06.800 05:59:28 -- 
common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:07.059 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:07.059 ... 00:19:07.059 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:07.059 ... 00:19:07.059 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:07.059 ... 00:19:07.059 fio-3.35 00:19:07.059 Starting 24 threads 00:19:07.366 [2024-12-15 05:59:28.922978] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:07.366 [2024-12-15 05:59:28.923059] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:19.572 00:19:19.572 filename0: (groupid=0, jobs=1): err= 0: pid=86521: Sun Dec 15 05:59:39 2024 00:19:19.572 read: IOPS=227, BW=910KiB/s (931kB/s)(9132KiB/10039msec) 00:19:19.572 slat (usec): min=6, max=8024, avg=24.80, stdev=259.25 00:19:19.572 clat (msec): min=19, max=143, avg=70.17, stdev=18.35 00:19:19.572 lat (msec): min=19, max=143, avg=70.20, stdev=18.35 00:19:19.572 clat percentiles (msec): 00:19:19.572 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:19:19.572 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:19:19.572 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:19:19.572 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 133], 00:19:19.572 | 99.99th=[ 144] 00:19:19.572 bw ( KiB/s): min= 624, max= 1024, per=4.09%, avg=906.80, stdev=102.95, samples=20 00:19:19.572 iops : min= 156, max= 256, avg=226.70, stdev=25.74, samples=20 00:19:19.572 lat (msec) : 20=0.70%, 50=19.14%, 100=73.63%, 250=6.53% 00:19:19.572 cpu : usr=31.80%, sys=1.40%, ctx=1000, majf=0, minf=9 00:19:19.572 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:19.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.572 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.572 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.572 filename0: (groupid=0, jobs=1): err= 0: pid=86522: Sun Dec 15 05:59:39 2024 00:19:19.572 read: IOPS=229, BW=916KiB/s (938kB/s)(9200KiB/10041msec) 00:19:19.572 slat (usec): min=7, max=4032, avg=17.62, stdev=115.79 00:19:19.572 clat (msec): min=15, max=131, avg=69.74, stdev=17.47 00:19:19.572 lat (msec): min=15, max=131, avg=69.75, stdev=17.47 00:19:19.572 clat percentiles (msec): 00:19:19.572 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:19:19.572 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:19:19.572 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 100], 00:19:19.572 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 130], 99.95th=[ 132], 00:19:19.572 | 99.99th=[ 132] 00:19:19.572 bw ( KiB/s): min= 720, max= 1024, per=4.12%, avg=913.60, stdev=87.04, samples=20 00:19:19.572 iops : min= 180, max= 256, avg=228.40, stdev=21.76, samples=20 00:19:19.572 lat (msec) : 20=0.70%, 50=16.13%, 100=78.91%, 250=4.26% 00:19:19.572 cpu : usr=38.81%, sys=2.02%, ctx=1316, majf=0, minf=9 00:19:19.572 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:19.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:19.572 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.572 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.572 filename0: (groupid=0, jobs=1): err= 0: pid=86523: Sun Dec 15 05:59:39 2024 00:19:19.572 read: IOPS=216, BW=867KiB/s (888kB/s)(8712KiB/10049msec) 00:19:19.572 slat (usec): min=3, max=8022, avg=20.55, stdev=206.98 00:19:19.572 clat (msec): min=11, max=135, avg=73.66, stdev=19.34 00:19:19.572 lat (msec): min=11, max=135, avg=73.68, stdev=19.34 00:19:19.572 clat percentiles (msec): 00:19:19.572 | 1.00th=[ 24], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:19.572 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:19:19.572 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 97], 95.00th=[ 106], 00:19:19.572 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:19:19.572 | 99.99th=[ 136] 00:19:19.572 bw ( KiB/s): min= 542, max= 1154, per=3.90%, avg=864.80, stdev=137.42, samples=20 00:19:19.572 iops : min= 135, max= 288, avg=216.15, stdev=34.36, samples=20 00:19:19.572 lat (msec) : 20=0.73%, 50=11.39%, 100=80.07%, 250=7.81% 00:19:19.572 cpu : usr=40.98%, sys=2.26%, ctx=1296, majf=0, minf=9 00:19:19.572 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=73.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 complete : 0=0.0%, 4=90.0%, 8=8.2%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.573 filename0: (groupid=0, jobs=1): err= 0: pid=86524: Sun Dec 15 05:59:39 2024 00:19:19.573 read: IOPS=233, BW=934KiB/s (956kB/s)(9352KiB/10017msec) 00:19:19.573 slat (usec): min=7, max=8026, avg=28.31, stdev=331.09 00:19:19.573 clat (msec): min=23, max=131, avg=68.44, stdev=18.19 00:19:19.573 lat (msec): min=23, max=131, avg=68.47, stdev=18.19 00:19:19.573 clat percentiles (msec): 00:19:19.573 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 48], 00:19:19.573 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 72], 00:19:19.573 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 103], 00:19:19.573 | 99.00th=[ 117], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:19:19.573 | 99.99th=[ 132] 00:19:19.573 bw ( KiB/s): min= 704, max= 1024, per=4.19%, avg=928.50, stdev=92.14, samples=20 00:19:19.573 iops : min= 176, max= 256, avg=232.10, stdev=23.05, samples=20 00:19:19.573 lat (msec) : 50=23.95%, 100=71.00%, 250=5.05% 00:19:19.573 cpu : usr=32.64%, sys=1.95%, ctx=886, majf=0, minf=9 00:19:19.573 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.573 filename0: (groupid=0, jobs=1): err= 0: pid=86525: Sun Dec 15 05:59:39 2024 00:19:19.573 read: IOPS=235, BW=943KiB/s (965kB/s)(9464KiB/10039msec) 00:19:19.573 slat (usec): min=7, max=8029, avg=21.25, stdev=213.74 00:19:19.573 clat (msec): min=15, max=121, avg=67.75, stdev=18.23 00:19:19.573 lat (msec): min=15, max=121, avg=67.77, stdev=18.24 00:19:19.573 clat percentiles (msec): 00:19:19.573 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 47], 
20.00th=[ 50], 00:19:19.573 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:19:19.573 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 101], 00:19:19.573 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 116], 99.95th=[ 121], 00:19:19.573 | 99.99th=[ 123] 00:19:19.573 bw ( KiB/s): min= 656, max= 1264, per=4.24%, avg=940.00, stdev=122.09, samples=20 00:19:19.573 iops : min= 164, max= 316, avg=235.00, stdev=30.52, samples=20 00:19:19.573 lat (msec) : 20=0.68%, 50=22.70%, 100=71.43%, 250=5.20% 00:19:19.573 cpu : usr=37.96%, sys=2.06%, ctx=1093, majf=0, minf=9 00:19:19.573 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.573 filename0: (groupid=0, jobs=1): err= 0: pid=86526: Sun Dec 15 05:59:39 2024 00:19:19.573 read: IOPS=223, BW=895KiB/s (916kB/s)(8988KiB/10046msec) 00:19:19.573 slat (usec): min=7, max=8025, avg=25.73, stdev=270.47 00:19:19.573 clat (msec): min=23, max=141, avg=71.38, stdev=20.54 00:19:19.573 lat (msec): min=23, max=142, avg=71.41, stdev=20.54 00:19:19.573 clat percentiles (msec): 00:19:19.573 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:19:19.573 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 72], 00:19:19.573 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 111], 00:19:19.573 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 138], 99.95th=[ 138], 00:19:19.573 | 99.99th=[ 142] 00:19:19.573 bw ( KiB/s): min= 541, max= 1133, per=4.02%, avg=892.10, stdev=146.17, samples=20 00:19:19.573 iops : min= 135, max= 283, avg=223.00, stdev=36.55, samples=20 00:19:19.573 lat (msec) : 50=17.53%, 100=71.25%, 250=11.21% 00:19:19.573 cpu : usr=39.11%, sys=2.30%, ctx=1164, majf=0, minf=9 00:19:19.573 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 complete : 0=0.0%, 4=88.9%, 8=10.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.573 filename0: (groupid=0, jobs=1): err= 0: pid=86527: Sun Dec 15 05:59:39 2024 00:19:19.573 read: IOPS=232, BW=931KiB/s (953kB/s)(9324KiB/10014msec) 00:19:19.573 slat (usec): min=7, max=8029, avg=32.08, stdev=322.45 00:19:19.573 clat (msec): min=25, max=144, avg=68.61, stdev=20.31 00:19:19.573 lat (msec): min=25, max=144, avg=68.64, stdev=20.31 00:19:19.573 clat percentiles (msec): 00:19:19.573 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 48], 00:19:19.573 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:19:19.573 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 108], 00:19:19.573 | 99.00th=[ 126], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 144], 00:19:19.573 | 99.99th=[ 144] 00:19:19.573 bw ( KiB/s): min= 528, max= 1048, per=4.19%, avg=928.70, stdev=141.66, samples=20 00:19:19.573 iops : min= 132, max= 262, avg=232.15, stdev=35.42, samples=20 00:19:19.573 lat (msec) : 50=23.08%, 100=68.34%, 250=8.58% 00:19:19.573 cpu : usr=38.52%, sys=2.50%, ctx=1387, majf=0, minf=9 00:19:19.573 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:19.573 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 issued rwts: total=2331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.573 filename0: (groupid=0, jobs=1): err= 0: pid=86528: Sun Dec 15 05:59:39 2024 00:19:19.573 read: IOPS=236, BW=948KiB/s (971kB/s)(9492KiB/10013msec) 00:19:19.573 slat (usec): min=3, max=9024, avg=28.75, stdev=324.82 00:19:19.573 clat (msec): min=25, max=136, avg=67.34, stdev=19.29 00:19:19.573 lat (msec): min=25, max=136, avg=67.37, stdev=19.28 00:19:19.573 clat percentiles (msec): 00:19:19.573 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:19:19.573 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:19:19.573 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:19:19.573 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 138], 00:19:19.573 | 99.99th=[ 138] 00:19:19.573 bw ( KiB/s): min= 632, max= 1072, per=4.25%, avg=942.85, stdev=118.45, samples=20 00:19:19.573 iops : min= 158, max= 268, avg=235.70, stdev=29.62, samples=20 00:19:19.573 lat (msec) : 50=25.33%, 100=68.98%, 250=5.69% 00:19:19.573 cpu : usr=34.33%, sys=1.73%, ctx=1113, majf=0, minf=9 00:19:19.573 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 issued rwts: total=2373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.573 filename1: (groupid=0, jobs=1): err= 0: pid=86529: Sun Dec 15 05:59:39 2024 00:19:19.573 read: IOPS=223, BW=895KiB/s (917kB/s)(9000KiB/10055msec) 00:19:19.573 slat (usec): min=3, max=8023, avg=29.21, stdev=306.96 00:19:19.573 clat (usec): min=1570, max=143791, avg=71281.70, stdev=21463.11 00:19:19.573 lat (usec): min=1578, max=143805, avg=71310.91, stdev=21465.12 00:19:19.573 clat percentiles (msec): 00:19:19.573 | 1.00th=[ 6], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:19:19.573 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:19:19.573 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 106], 00:19:19.573 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:19:19.573 | 99.99th=[ 144] 00:19:19.573 bw ( KiB/s): min= 528, max= 1280, per=4.03%, avg=893.50, stdev=159.53, samples=20 00:19:19.573 iops : min= 132, max= 320, avg=223.35, stdev=39.91, samples=20 00:19:19.573 lat (msec) : 2=0.71%, 10=0.71%, 20=1.42%, 50=15.82%, 100=74.76% 00:19:19.573 lat (msec) : 250=6.58% 00:19:19.573 cpu : usr=36.34%, sys=2.01%, ctx=1582, majf=0, minf=9 00:19:19.573 IO depths : 1=0.2%, 2=1.9%, 4=7.0%, 8=75.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 complete : 0=0.0%, 4=89.5%, 8=9.0%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.573 filename1: (groupid=0, jobs=1): err= 0: pid=86530: Sun Dec 15 05:59:39 2024 00:19:19.573 read: IOPS=237, BW=950KiB/s (973kB/s)(9504KiB/10006msec) 00:19:19.573 slat (usec): min=4, max=8032, avg=24.81, stdev=284.64 00:19:19.573 clat (msec): min=25, max=132, avg=67.27, stdev=18.51 00:19:19.573 lat (msec): min=25, 
max=132, avg=67.29, stdev=18.51 00:19:19.573 clat percentiles (msec): 00:19:19.573 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 48], 00:19:19.573 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:19:19.573 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 105], 00:19:19.573 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:19:19.573 | 99.99th=[ 133] 00:19:19.573 bw ( KiB/s): min= 633, max= 1080, per=4.27%, avg=946.63, stdev=110.16, samples=19 00:19:19.573 iops : min= 158, max= 270, avg=236.63, stdev=27.58, samples=19 00:19:19.573 lat (msec) : 50=25.72%, 100=68.01%, 250=6.27% 00:19:19.573 cpu : usr=33.68%, sys=1.83%, ctx=915, majf=0, minf=9 00:19:19.573 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.573 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.573 filename1: (groupid=0, jobs=1): err= 0: pid=86531: Sun Dec 15 05:59:39 2024 00:19:19.573 read: IOPS=231, BW=925KiB/s (947kB/s)(9256KiB/10006msec) 00:19:19.573 slat (usec): min=4, max=12024, avg=28.26, stdev=332.90 00:19:19.573 clat (msec): min=25, max=143, avg=69.06, stdev=19.72 00:19:19.573 lat (msec): min=25, max=143, avg=69.09, stdev=19.72 00:19:19.573 clat percentiles (msec): 00:19:19.573 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 48], 00:19:19.573 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:19:19.573 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:19:19.573 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 133], 99.95th=[ 144], 00:19:19.573 | 99.99th=[ 144] 00:19:19.574 bw ( KiB/s): min= 640, max= 1048, per=4.13%, avg=915.05, stdev=136.69, samples=19 00:19:19.574 iops : min= 160, max= 262, avg=228.74, stdev=34.20, samples=19 00:19:19.574 lat (msec) : 50=24.81%, 100=68.84%, 250=6.35% 00:19:19.574 cpu : usr=33.94%, sys=1.92%, ctx=949, majf=0, minf=9 00:19:19.574 IO depths : 1=0.1%, 2=1.3%, 4=5.4%, 8=78.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:19.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 issued rwts: total=2314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.574 filename1: (groupid=0, jobs=1): err= 0: pid=86532: Sun Dec 15 05:59:39 2024 00:19:19.574 read: IOPS=222, BW=891KiB/s (913kB/s)(8948KiB/10040msec) 00:19:19.574 slat (usec): min=3, max=9023, avg=35.95, stdev=416.80 00:19:19.574 clat (msec): min=15, max=144, avg=71.62, stdev=19.45 00:19:19.574 lat (msec): min=15, max=144, avg=71.65, stdev=19.46 00:19:19.574 clat percentiles (msec): 00:19:19.574 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:19:19.574 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:19:19.574 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 105], 00:19:19.574 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:19:19.574 | 99.99th=[ 144] 00:19:19.574 bw ( KiB/s): min= 528, max= 1136, per=4.01%, avg=888.40, stdev=137.92, samples=20 00:19:19.574 iops : min= 132, max= 284, avg=222.10, stdev=34.48, samples=20 00:19:19.574 lat (msec) : 20=0.72%, 50=17.08%, 100=75.68%, 250=6.53% 00:19:19.574 cpu : usr=33.73%, sys=1.88%, ctx=930, 
majf=0, minf=9 00:19:19.574 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=75.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:19.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 complete : 0=0.0%, 4=89.5%, 8=9.0%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.574 filename1: (groupid=0, jobs=1): err= 0: pid=86533: Sun Dec 15 05:59:39 2024 00:19:19.574 read: IOPS=227, BW=908KiB/s (930kB/s)(9104KiB/10023msec) 00:19:19.574 slat (usec): min=4, max=8040, avg=36.07, stdev=411.04 00:19:19.574 clat (msec): min=35, max=144, avg=70.30, stdev=19.09 00:19:19.574 lat (msec): min=35, max=144, avg=70.34, stdev=19.09 00:19:19.574 clat percentiles (msec): 00:19:19.574 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:19:19.574 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:19:19.574 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:19:19.574 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:19.574 | 99.99th=[ 144] 00:19:19.574 bw ( KiB/s): min= 624, max= 1048, per=4.09%, avg=906.30, stdev=128.73, samples=20 00:19:19.574 iops : min= 156, max= 262, avg=226.55, stdev=32.24, samples=20 00:19:19.574 lat (msec) : 50=21.35%, 100=71.57%, 250=7.07% 00:19:19.574 cpu : usr=31.65%, sys=1.52%, ctx=995, majf=0, minf=9 00:19:19.574 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=78.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:19.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 issued rwts: total=2276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.574 filename1: (groupid=0, jobs=1): err= 0: pid=86534: Sun Dec 15 05:59:39 2024 00:19:19.574 read: IOPS=230, BW=921KiB/s (943kB/s)(9244KiB/10037msec) 00:19:19.574 slat (usec): min=6, max=8030, avg=24.59, stdev=288.70 00:19:19.574 clat (msec): min=32, max=140, avg=69.37, stdev=17.90 00:19:19.574 lat (msec): min=32, max=140, avg=69.39, stdev=17.90 00:19:19.574 clat percentiles (msec): 00:19:19.574 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 49], 00:19:19.574 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:19:19.574 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 103], 00:19:19.574 | 99.00th=[ 113], 99.50th=[ 117], 99.90th=[ 132], 99.95th=[ 142], 00:19:19.574 | 99.99th=[ 142] 00:19:19.574 bw ( KiB/s): min= 744, max= 1072, per=4.14%, avg=918.05, stdev=92.84, samples=20 00:19:19.574 iops : min= 186, max= 268, avg=229.50, stdev=23.23, samples=20 00:19:19.574 lat (msec) : 50=21.85%, 100=72.74%, 250=5.41% 00:19:19.574 cpu : usr=37.36%, sys=1.86%, ctx=1021, majf=0, minf=9 00:19:19.574 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:19.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.574 filename1: (groupid=0, jobs=1): err= 0: pid=86535: Sun Dec 15 05:59:39 2024 00:19:19.574 read: IOPS=238, BW=953KiB/s (976kB/s)(9536KiB/10006msec) 00:19:19.574 slat (usec): min=7, max=8028, avg=24.74, stdev=232.07 00:19:19.574 clat (msec): min=27, max=119, avg=67.04, 
stdev=17.39 00:19:19.574 lat (msec): min=27, max=119, avg=67.06, stdev=17.38 00:19:19.574 clat percentiles (msec): 00:19:19.574 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 48], 00:19:19.574 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:19:19.574 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 100], 00:19:19.574 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 114], 99.95th=[ 121], 00:19:19.574 | 99.99th=[ 121] 00:19:19.574 bw ( KiB/s): min= 712, max= 1048, per=4.28%, avg=949.47, stdev=83.85, samples=19 00:19:19.574 iops : min= 178, max= 262, avg=237.32, stdev=20.99, samples=19 00:19:19.574 lat (msec) : 50=23.78%, 100=71.81%, 250=4.40% 00:19:19.574 cpu : usr=40.91%, sys=1.97%, ctx=1260, majf=0, minf=10 00:19:19.574 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:19.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 issued rwts: total=2384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.574 filename1: (groupid=0, jobs=1): err= 0: pid=86536: Sun Dec 15 05:59:39 2024 00:19:19.574 read: IOPS=234, BW=939KiB/s (962kB/s)(9440KiB/10051msec) 00:19:19.574 slat (usec): min=3, max=8024, avg=31.86, stdev=329.26 00:19:19.574 clat (msec): min=12, max=134, avg=67.91, stdev=18.50 00:19:19.574 lat (msec): min=12, max=134, avg=67.94, stdev=18.49 00:19:19.574 clat percentiles (msec): 00:19:19.574 | 1.00th=[ 17], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 50], 00:19:19.574 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:19:19.574 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 103], 00:19:19.574 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 124], 99.95th=[ 129], 00:19:19.574 | 99.99th=[ 136] 00:19:19.574 bw ( KiB/s): min= 792, max= 1237, per=4.23%, avg=937.35, stdev=97.23, samples=20 00:19:19.574 iops : min= 198, max= 309, avg=234.30, stdev=24.29, samples=20 00:19:19.574 lat (msec) : 20=1.36%, 50=19.62%, 100=73.69%, 250=5.34% 00:19:19.574 cpu : usr=40.97%, sys=1.96%, ctx=1173, majf=0, minf=9 00:19:19.574 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:19.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 issued rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.574 filename2: (groupid=0, jobs=1): err= 0: pid=86537: Sun Dec 15 05:59:39 2024 00:19:19.574 read: IOPS=247, BW=989KiB/s (1013kB/s)(9896KiB/10002msec) 00:19:19.574 slat (usec): min=4, max=4026, avg=19.64, stdev=127.37 00:19:19.574 clat (msec): min=2, max=128, avg=64.59, stdev=19.13 00:19:19.574 lat (msec): min=2, max=128, avg=64.61, stdev=19.12 00:19:19.574 clat percentiles (msec): 00:19:19.574 | 1.00th=[ 4], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:19:19.574 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:19:19.574 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 92], 95.00th=[ 97], 00:19:19.574 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 112], 99.95th=[ 129], 00:19:19.574 | 99.99th=[ 129] 00:19:19.574 bw ( KiB/s): min= 744, max= 1072, per=4.36%, avg=967.05, stdev=88.25, samples=19 00:19:19.574 iops : min= 186, max= 268, avg=241.74, stdev=22.07, samples=19 00:19:19.574 lat (msec) : 4=1.29%, 10=0.24%, 50=26.52%, 100=68.19%, 
250=3.76% 00:19:19.574 cpu : usr=42.39%, sys=2.23%, ctx=1087, majf=0, minf=9 00:19:19.574 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:19.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.574 filename2: (groupid=0, jobs=1): err= 0: pid=86538: Sun Dec 15 05:59:39 2024 00:19:19.574 read: IOPS=231, BW=925KiB/s (947kB/s)(9292KiB/10048msec) 00:19:19.574 slat (usec): min=4, max=8020, avg=23.71, stdev=220.52 00:19:19.574 clat (usec): min=1348, max=144223, avg=69071.48, stdev=20052.56 00:19:19.574 lat (usec): min=1372, max=144237, avg=69095.19, stdev=20049.46 00:19:19.574 clat percentiles (msec): 00:19:19.574 | 1.00th=[ 7], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 52], 00:19:19.574 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:19:19.574 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 104], 00:19:19.574 | 99.00th=[ 114], 99.50th=[ 116], 99.90th=[ 136], 99.95th=[ 136], 00:19:19.574 | 99.99th=[ 144] 00:19:19.574 bw ( KiB/s): min= 678, max= 1296, per=4.16%, avg=922.70, stdev=129.75, samples=20 00:19:19.574 iops : min= 169, max= 324, avg=230.65, stdev=32.49, samples=20 00:19:19.574 lat (msec) : 2=0.09%, 4=0.69%, 10=0.69%, 20=0.69%, 50=16.27% 00:19:19.574 lat (msec) : 100=74.13%, 250=7.45% 00:19:19.574 cpu : usr=37.96%, sys=2.29%, ctx=1215, majf=0, minf=9 00:19:19.574 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:19.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.574 issued rwts: total=2323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.574 filename2: (groupid=0, jobs=1): err= 0: pid=86539: Sun Dec 15 05:59:39 2024 00:19:19.574 read: IOPS=231, BW=925KiB/s (947kB/s)(9252KiB/10007msec) 00:19:19.574 slat (usec): min=4, max=8027, avg=35.88, stdev=407.60 00:19:19.574 clat (msec): min=7, max=143, avg=69.03, stdev=19.02 00:19:19.574 lat (msec): min=7, max=143, avg=69.07, stdev=19.02 00:19:19.574 clat percentiles (msec): 00:19:19.574 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 49], 00:19:19.574 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:19:19.574 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 00:19:19.575 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:19:19.575 | 99.99th=[ 144] 00:19:19.575 bw ( KiB/s): min= 544, max= 1048, per=4.13%, avg=916.26, stdev=126.18, samples=19 00:19:19.575 iops : min= 136, max= 262, avg=229.05, stdev=31.56, samples=19 00:19:19.575 lat (msec) : 10=0.26%, 20=0.13%, 50=22.31%, 100=71.73%, 250=5.58% 00:19:19.575 cpu : usr=31.55%, sys=1.61%, ctx=984, majf=0, minf=9 00:19:19.575 IO depths : 1=0.1%, 2=0.7%, 4=3.0%, 8=80.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:19.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 issued rwts: total=2313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.575 filename2: (groupid=0, jobs=1): err= 0: pid=86540: Sun Dec 15 05:59:39 2024 00:19:19.575 read: 
IOPS=218, BW=875KiB/s (897kB/s)(8776KiB/10024msec) 00:19:19.575 slat (usec): min=4, max=8022, avg=19.71, stdev=191.17 00:19:19.575 clat (msec): min=35, max=135, avg=72.95, stdev=19.14 00:19:19.575 lat (msec): min=35, max=135, avg=72.97, stdev=19.14 00:19:19.575 clat percentiles (msec): 00:19:19.575 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:19:19.575 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:19:19.575 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 102], 95.00th=[ 108], 00:19:19.575 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 136], 00:19:19.575 | 99.99th=[ 136] 00:19:19.575 bw ( KiB/s): min= 640, max= 1008, per=3.93%, avg=872.30, stdev=133.92, samples=20 00:19:19.575 iops : min= 160, max= 252, avg=218.05, stdev=33.52, samples=20 00:19:19.575 lat (msec) : 50=17.09%, 100=72.38%, 250=10.53% 00:19:19.575 cpu : usr=37.66%, sys=1.96%, ctx=1221, majf=0, minf=9 00:19:19.575 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:19.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 complete : 0=0.0%, 4=89.7%, 8=8.4%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.575 filename2: (groupid=0, jobs=1): err= 0: pid=86541: Sun Dec 15 05:59:39 2024 00:19:19.575 read: IOPS=232, BW=929KiB/s (951kB/s)(9320KiB/10037msec) 00:19:19.575 slat (usec): min=3, max=8029, avg=19.55, stdev=185.70 00:19:19.575 clat (msec): min=31, max=119, avg=68.78, stdev=17.82 00:19:19.575 lat (msec): min=31, max=119, avg=68.79, stdev=17.81 00:19:19.575 clat percentiles (msec): 00:19:19.575 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:19:19.575 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:19:19.575 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 102], 00:19:19.575 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 113], 99.95th=[ 121], 00:19:19.575 | 99.99th=[ 121] 00:19:19.575 bw ( KiB/s): min= 656, max= 1056, per=4.17%, avg=925.60, stdev=99.79, samples=20 00:19:19.575 iops : min= 164, max= 264, avg=231.40, stdev=24.95, samples=20 00:19:19.575 lat (msec) : 50=22.88%, 100=71.97%, 250=5.15% 00:19:19.575 cpu : usr=34.36%, sys=1.78%, ctx=998, majf=0, minf=9 00:19:19.575 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:19.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 issued rwts: total=2330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.575 filename2: (groupid=0, jobs=1): err= 0: pid=86542: Sun Dec 15 05:59:39 2024 00:19:19.575 read: IOPS=231, BW=928KiB/s (950kB/s)(9316KiB/10041msec) 00:19:19.575 slat (usec): min=7, max=5030, avg=21.74, stdev=177.56 00:19:19.575 clat (msec): min=17, max=143, avg=68.84, stdev=19.52 00:19:19.575 lat (msec): min=17, max=143, avg=68.86, stdev=19.53 00:19:19.575 clat percentiles (msec): 00:19:19.575 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 50], 00:19:19.575 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:19:19.575 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 106], 00:19:19.575 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 144], 00:19:19.575 | 99.99th=[ 144] 00:19:19.575 bw ( KiB/s): min= 512, max= 1136, per=4.17%, avg=925.20, 
stdev=142.88, samples=20 00:19:19.575 iops : min= 128, max= 284, avg=231.30, stdev=35.72, samples=20 00:19:19.575 lat (msec) : 20=0.69%, 50=20.61%, 100=70.67%, 250=8.03% 00:19:19.575 cpu : usr=41.27%, sys=2.47%, ctx=1705, majf=0, minf=9 00:19:19.575 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:19.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 issued rwts: total=2329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.575 filename2: (groupid=0, jobs=1): err= 0: pid=86543: Sun Dec 15 05:59:39 2024 00:19:19.575 read: IOPS=234, BW=939KiB/s (962kB/s)(9404KiB/10011msec) 00:19:19.575 slat (usec): min=3, max=8026, avg=31.09, stdev=297.69 00:19:19.575 clat (msec): min=25, max=143, avg=67.99, stdev=19.85 00:19:19.575 lat (msec): min=25, max=143, avg=68.02, stdev=19.84 00:19:19.575 clat percentiles (msec): 00:19:19.575 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 48], 00:19:19.575 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:19:19.575 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:19:19.575 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 144], 00:19:19.575 | 99.99th=[ 144] 00:19:19.575 bw ( KiB/s): min= 512, max= 1072, per=4.21%, avg=933.21, stdev=144.63, samples=19 00:19:19.575 iops : min= 128, max= 268, avg=233.26, stdev=36.21, samples=19 00:19:19.575 lat (msec) : 50=23.95%, 100=68.31%, 250=7.74% 00:19:19.575 cpu : usr=45.61%, sys=2.60%, ctx=1074, majf=0, minf=9 00:19:19.575 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:19.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 issued rwts: total=2351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:19.575 filename2: (groupid=0, jobs=1): err= 0: pid=86544: Sun Dec 15 05:59:39 2024 00:19:19.575 read: IOPS=249, BW=998KiB/s (1022kB/s)(9988KiB/10008msec) 00:19:19.575 slat (usec): min=4, max=8031, avg=42.86, stdev=431.96 00:19:19.575 clat (usec): min=1158, max=125000, avg=63964.37, stdev=19759.00 00:19:19.575 lat (usec): min=1167, max=125013, avg=64007.23, stdev=19760.78 00:19:19.575 clat percentiles (msec): 00:19:19.575 | 1.00th=[ 4], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 48], 00:19:19.575 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:19:19.575 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 99], 00:19:19.575 | 99.00th=[ 110], 99.50th=[ 110], 99.90th=[ 115], 99.95th=[ 126], 00:19:19.575 | 99.99th=[ 126] 00:19:19.575 bw ( KiB/s): min= 768, max= 1056, per=4.38%, avg=972.79, stdev=93.72, samples=19 00:19:19.575 iops : min= 192, max= 264, avg=243.16, stdev=23.40, samples=19 00:19:19.575 lat (msec) : 2=0.36%, 4=0.64%, 10=0.88%, 50=28.23%, 100=65.60% 00:19:19.575 lat (msec) : 250=4.29% 00:19:19.575 cpu : usr=36.95%, sys=2.03%, ctx=1084, majf=0, minf=9 00:19:19.575 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:19.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.575 issued rwts: total=2497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.575 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:19:19.575 00:19:19.575 Run status group 0 (all jobs): 00:19:19.575 READ: bw=21.7MiB/s (22.7MB/s), 867KiB/s-998KiB/s (888kB/s-1022kB/s), io=218MiB (228MB), run=10002-10055msec 00:19:19.575 05:59:39 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:19.575 05:59:39 -- target/dif.sh@43 -- # local sub 00:19:19.575 05:59:39 -- target/dif.sh@45 -- # for sub in "$@" 00:19:19.575 05:59:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:19.575 05:59:39 -- target/dif.sh@36 -- # local sub_id=0 00:19:19.575 05:59:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:19.575 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.575 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.575 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.575 05:59:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:19.575 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.575 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.575 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.575 05:59:39 -- target/dif.sh@45 -- # for sub in "$@" 00:19:19.575 05:59:39 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:19.575 05:59:39 -- target/dif.sh@36 -- # local sub_id=1 00:19:19.575 05:59:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:19.575 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.575 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.575 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.575 05:59:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:19.575 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.575 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.575 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.575 05:59:39 -- target/dif.sh@45 -- # for sub in "$@" 00:19:19.575 05:59:39 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:19.575 05:59:39 -- target/dif.sh@36 -- # local sub_id=2 00:19:19.575 05:59:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:19.575 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.575 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.575 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.575 05:59:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:19.575 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.575 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.575 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.575 05:59:39 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:19.575 05:59:39 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:19.575 05:59:39 -- target/dif.sh@115 -- # numjobs=2 00:19:19.575 05:59:39 -- target/dif.sh@115 -- # iodepth=8 00:19:19.575 05:59:39 -- target/dif.sh@115 -- # runtime=5 00:19:19.575 05:59:39 -- target/dif.sh@115 -- # files=1 00:19:19.575 05:59:39 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:19.575 05:59:39 -- target/dif.sh@28 -- # local sub 00:19:19.575 05:59:39 -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.575 05:59:39 -- target/dif.sh@31 -- # create_subsystem 0 00:19:19.575 05:59:39 -- target/dif.sh@18 -- # local sub_id=0 00:19:19.575 05:59:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
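The surrounding trace builds each DIF test target out of a null bdev: create the bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and expose a TCP listener. Consolidated as a plain shell sketch (rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py, whose exact path is an assumption here; the arguments are the ones visible in the trace):

# Hedged sketch of the per-subsystem setup performed by create_subsystem 0 in the trace.
# The rpc.py location is assumed; all arguments are copied from the captured commands.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420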
00:19:19.576 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.576 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.576 bdev_null0 00:19:19.576 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.576 05:59:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:19.576 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.576 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.576 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.576 05:59:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:19.576 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.576 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.576 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.576 05:59:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:19.576 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.576 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.576 [2024-12-15 05:59:39.373949] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.576 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.576 05:59:39 -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.576 05:59:39 -- target/dif.sh@31 -- # create_subsystem 1 00:19:19.576 05:59:39 -- target/dif.sh@18 -- # local sub_id=1 00:19:19.576 05:59:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:19.576 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.576 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.576 bdev_null1 00:19:19.576 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.576 05:59:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:19.576 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.576 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.576 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.576 05:59:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:19.576 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.576 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.576 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.576 05:59:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.576 05:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.576 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:19.576 05:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.576 05:59:39 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:19.576 05:59:39 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:19.576 05:59:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:19.576 05:59:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.576 05:59:39 -- nvmf/common.sh@520 -- # config=() 00:19:19.576 05:59:39 -- nvmf/common.sh@520 -- # local subsystem config 00:19:19.576 05:59:39 -- common/autotest_common.sh@1345 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.576 05:59:39 -- target/dif.sh@82 -- # gen_fio_conf 00:19:19.576 05:59:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:19.576 05:59:39 -- target/dif.sh@54 -- # local file 00:19:19.576 05:59:39 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:19.576 05:59:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:19.576 { 00:19:19.576 "params": { 00:19:19.576 "name": "Nvme$subsystem", 00:19:19.576 "trtype": "$TEST_TRANSPORT", 00:19:19.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.576 "adrfam": "ipv4", 00:19:19.576 "trsvcid": "$NVMF_PORT", 00:19:19.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.576 "hdgst": ${hdgst:-false}, 00:19:19.576 "ddgst": ${ddgst:-false} 00:19:19.576 }, 00:19:19.576 "method": "bdev_nvme_attach_controller" 00:19:19.576 } 00:19:19.576 EOF 00:19:19.576 )") 00:19:19.576 05:59:39 -- target/dif.sh@56 -- # cat 00:19:19.576 05:59:39 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:19.576 05:59:39 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:19.576 05:59:39 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.576 05:59:39 -- common/autotest_common.sh@1330 -- # shift 00:19:19.576 05:59:39 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:19.576 05:59:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.576 05:59:39 -- nvmf/common.sh@542 -- # cat 00:19:19.576 05:59:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.576 05:59:39 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:19.576 05:59:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:19.576 05:59:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:19.576 05:59:39 -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.576 05:59:39 -- target/dif.sh@73 -- # cat 00:19:19.576 05:59:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:19.576 05:59:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:19.576 { 00:19:19.576 "params": { 00:19:19.576 "name": "Nvme$subsystem", 00:19:19.576 "trtype": "$TEST_TRANSPORT", 00:19:19.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.576 "adrfam": "ipv4", 00:19:19.576 "trsvcid": "$NVMF_PORT", 00:19:19.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.576 "hdgst": ${hdgst:-false}, 00:19:19.576 "ddgst": ${ddgst:-false} 00:19:19.576 }, 00:19:19.576 "method": "bdev_nvme_attach_controller" 00:19:19.576 } 00:19:19.576 EOF 00:19:19.576 )") 00:19:19.576 05:59:39 -- nvmf/common.sh@542 -- # cat 00:19:19.576 05:59:39 -- target/dif.sh@72 -- # (( file++ )) 00:19:19.576 05:59:39 -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.576 05:59:39 -- nvmf/common.sh@544 -- # jq . 
00:19:19.576 05:59:39 -- nvmf/common.sh@545 -- # IFS=, 00:19:19.576 05:59:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:19.576 "params": { 00:19:19.576 "name": "Nvme0", 00:19:19.576 "trtype": "tcp", 00:19:19.576 "traddr": "10.0.0.2", 00:19:19.576 "adrfam": "ipv4", 00:19:19.576 "trsvcid": "4420", 00:19:19.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:19.576 "hdgst": false, 00:19:19.576 "ddgst": false 00:19:19.576 }, 00:19:19.576 "method": "bdev_nvme_attach_controller" 00:19:19.576 },{ 00:19:19.576 "params": { 00:19:19.576 "name": "Nvme1", 00:19:19.576 "trtype": "tcp", 00:19:19.576 "traddr": "10.0.0.2", 00:19:19.576 "adrfam": "ipv4", 00:19:19.576 "trsvcid": "4420", 00:19:19.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.576 "hdgst": false, 00:19:19.576 "ddgst": false 00:19:19.576 }, 00:19:19.576 "method": "bdev_nvme_attach_controller" 00:19:19.576 }' 00:19:19.576 05:59:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:19.576 05:59:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:19.576 05:59:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.576 05:59:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.576 05:59:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:19.576 05:59:39 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:19.576 05:59:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:19.576 05:59:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:19.576 05:59:39 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:19.576 05:59:39 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.576 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:19.576 ... 00:19:19.576 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:19.576 ... 00:19:19.576 fio-3.35 00:19:19.576 Starting 4 threads 00:19:19.576 [2024-12-15 05:59:39.990020] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:19.576 [2024-12-15 05:59:39.990079] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:23.764 00:19:23.764 filename0: (groupid=0, jobs=1): err= 0: pid=86687: Sun Dec 15 05:59:45 2024 00:19:23.764 read: IOPS=2249, BW=17.6MiB/s (18.4MB/s)(87.9MiB/5003msec) 00:19:23.764 slat (nsec): min=7225, max=71947, avg=12133.29, stdev=4958.82 00:19:23.764 clat (usec): min=1255, max=6744, avg=3524.23, stdev=1038.61 00:19:23.764 lat (usec): min=1263, max=6788, avg=3536.36, stdev=1038.53 00:19:23.764 clat percentiles (usec): 00:19:23.764 | 1.00th=[ 1926], 5.00th=[ 1991], 10.00th=[ 2024], 20.00th=[ 2540], 00:19:23.764 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3326], 60.00th=[ 4228], 00:19:23.764 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4752], 95.00th=[ 4817], 00:19:23.764 | 99.00th=[ 4883], 99.50th=[ 4883], 99.90th=[ 4948], 99.95th=[ 4948], 00:19:23.764 | 99.99th=[ 5080] 00:19:23.764 bw ( KiB/s): min=16721, max=18240, per=26.69%, avg=18001.70, stdev=454.65, samples=10 00:19:23.764 iops : min= 2090, max= 2280, avg=2250.20, stdev=56.87, samples=10 00:19:23.764 lat (msec) : 2=6.90%, 4=51.20%, 10=41.90% 00:19:23.764 cpu : usr=91.42%, sys=7.58%, ctx=12, majf=0, minf=0 00:19:23.764 IO depths : 1=0.2%, 2=2.1%, 4=62.5%, 8=35.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.764 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.764 issued rwts: total=11256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.764 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:23.764 filename0: (groupid=0, jobs=1): err= 0: pid=86688: Sun Dec 15 05:59:45 2024 00:19:23.764 read: IOPS=1685, BW=13.2MiB/s (13.8MB/s)(65.9MiB/5001msec) 00:19:23.764 slat (nsec): min=6892, max=66643, avg=10706.96, stdev=4437.72 00:19:23.764 clat (usec): min=772, max=6263, avg=4701.19, stdev=361.42 00:19:23.764 lat (usec): min=779, max=6278, avg=4711.90, stdev=359.78 00:19:23.764 clat percentiles (usec): 00:19:23.764 | 1.00th=[ 3654], 5.00th=[ 3818], 10.00th=[ 3949], 20.00th=[ 4752], 00:19:23.764 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4817], 00:19:23.764 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4883], 95.00th=[ 4948], 00:19:23.764 | 99.00th=[ 5145], 99.50th=[ 5932], 99.90th=[ 6063], 99.95th=[ 6063], 00:19:23.764 | 99.99th=[ 6259] 00:19:23.764 bw ( KiB/s): min=13184, max=16000, per=20.01%, avg=13496.89, stdev=938.67, samples=9 00:19:23.764 iops : min= 1648, max= 2000, avg=1687.11, stdev=117.33, samples=9 00:19:23.764 lat (usec) : 1000=0.06% 00:19:23.764 lat (msec) : 4=10.57%, 10=89.37% 00:19:23.764 cpu : usr=91.90%, sys=7.32%, ctx=15, majf=0, minf=9 00:19:23.764 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.764 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.764 issued rwts: total=8429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.764 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:23.764 filename1: (groupid=0, jobs=1): err= 0: pid=86689: Sun Dec 15 05:59:45 2024 00:19:23.764 read: IOPS=2250, BW=17.6MiB/s (18.4MB/s)(87.9MiB/5001msec) 00:19:23.764 slat (nsec): min=7366, max=63038, avg=15066.67, stdev=4089.48 00:19:23.764 clat (usec): min=952, max=6795, avg=3517.88, stdev=1028.87 00:19:23.764 lat (usec): min=959, max=6809, avg=3532.94, stdev=1028.40 00:19:23.764 clat percentiles (usec): 
00:19:23.764 | 1.00th=[ 1942], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2540], 00:19:23.764 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 3326], 60.00th=[ 4228], 00:19:23.764 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4752], 95.00th=[ 4752], 00:19:23.764 | 99.00th=[ 4883], 99.50th=[ 4883], 99.90th=[ 4948], 99.95th=[ 4948], 00:19:23.764 | 99.99th=[ 4948] 00:19:23.764 bw ( KiB/s): min=16624, max=18240, per=26.65%, avg=17975.11, stdev=509.31, samples=9 00:19:23.764 iops : min= 2078, max= 2280, avg=2246.89, stdev=63.66, samples=9 00:19:23.764 lat (usec) : 1000=0.03% 00:19:23.764 lat (msec) : 2=4.53%, 4=53.68%, 10=41.76% 00:19:23.764 cpu : usr=91.78%, sys=7.28%, ctx=9, majf=0, minf=9 00:19:23.764 IO depths : 1=0.1%, 2=2.0%, 4=62.6%, 8=35.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.764 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.764 issued rwts: total=11253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.764 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:23.764 filename1: (groupid=0, jobs=1): err= 0: pid=86690: Sun Dec 15 05:59:45 2024 00:19:23.764 read: IOPS=2247, BW=17.6MiB/s (18.4MB/s)(87.9MiB/5004msec) 00:19:23.764 slat (nsec): min=7143, max=62931, avg=15164.86, stdev=4185.17 00:19:23.764 clat (usec): min=768, max=7791, avg=3522.45, stdev=1041.06 00:19:23.764 lat (usec): min=776, max=7804, avg=3537.61, stdev=1040.76 00:19:23.764 clat percentiles (usec): 00:19:23.764 | 1.00th=[ 1942], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2573], 00:19:23.764 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 3294], 60.00th=[ 4228], 00:19:23.764 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4752], 95.00th=[ 4817], 00:19:23.764 | 99.00th=[ 4883], 99.50th=[ 4883], 99.90th=[ 6652], 99.95th=[ 7767], 00:19:23.764 | 99.99th=[ 7767] 00:19:23.764 bw ( KiB/s): min=16512, max=18240, per=26.66%, avg=17980.80, stdev=520.15, samples=10 00:19:23.764 iops : min= 2064, max= 2280, avg=2247.60, stdev=65.02, samples=10 00:19:23.764 lat (usec) : 1000=0.04% 00:19:23.764 lat (msec) : 2=4.31%, 4=53.72%, 10=41.93% 00:19:23.764 cpu : usr=92.34%, sys=6.38%, ctx=169, majf=0, minf=0 00:19:23.764 IO depths : 1=0.1%, 2=2.0%, 4=62.6%, 8=35.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.764 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.764 issued rwts: total=11245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.764 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:23.764 00:19:23.764 Run status group 0 (all jobs): 00:19:23.764 READ: bw=65.9MiB/s (69.1MB/s), 13.2MiB/s-17.6MiB/s (13.8MB/s-18.4MB/s), io=330MiB (346MB), run=5001-5004msec 00:19:23.764 05:59:45 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:23.764 05:59:45 -- target/dif.sh@43 -- # local sub 00:19:23.764 05:59:45 -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.764 05:59:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:23.764 05:59:45 -- target/dif.sh@36 -- # local sub_id=0 00:19:23.764 05:59:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:23.764 05:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.764 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.764 05:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.764 05:59:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:23.764 05:59:45 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.764 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.764 05:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.764 05:59:45 -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.764 05:59:45 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:23.764 05:59:45 -- target/dif.sh@36 -- # local sub_id=1 00:19:23.764 05:59:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.764 05:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.764 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.764 05:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.764 05:59:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:23.764 05:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.764 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.764 ************************************ 00:19:23.764 END TEST fio_dif_rand_params 00:19:23.764 ************************************ 00:19:23.764 05:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.764 00:19:23.764 real 0m23.010s 00:19:23.764 user 2m3.445s 00:19:23.765 sys 0m8.058s 00:19:23.765 05:59:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:23.765 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 05:59:45 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:23.765 05:59:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:23.765 05:59:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:23.765 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 ************************************ 00:19:23.765 START TEST fio_dif_digest 00:19:23.765 ************************************ 00:19:23.765 05:59:45 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:23.765 05:59:45 -- target/dif.sh@123 -- # local NULL_DIF 00:19:23.765 05:59:45 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:23.765 05:59:45 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:23.765 05:59:45 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:23.765 05:59:45 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:23.765 05:59:45 -- target/dif.sh@127 -- # numjobs=3 00:19:23.765 05:59:45 -- target/dif.sh@127 -- # iodepth=3 00:19:23.765 05:59:45 -- target/dif.sh@127 -- # runtime=10 00:19:23.765 05:59:45 -- target/dif.sh@128 -- # hdgst=true 00:19:23.765 05:59:45 -- target/dif.sh@128 -- # ddgst=true 00:19:23.765 05:59:45 -- target/dif.sh@130 -- # create_subsystems 0 00:19:23.765 05:59:45 -- target/dif.sh@28 -- # local sub 00:19:23.765 05:59:45 -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.765 05:59:45 -- target/dif.sh@31 -- # create_subsystem 0 00:19:23.765 05:59:45 -- target/dif.sh@18 -- # local sub_id=0 00:19:23.765 05:59:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:23.765 05:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.765 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 bdev_null0 00:19:23.765 05:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.765 05:59:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:23.765 05:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.765 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 05:59:45 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:23.765 05:59:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:23.765 05:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.765 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 05:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.765 05:59:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:23.765 05:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.765 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 [2024-12-15 05:59:45.382438] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.765 05:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.765 05:59:45 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:23.765 05:59:45 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:23.765 05:59:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:23.765 05:59:45 -- nvmf/common.sh@520 -- # config=() 00:19:23.765 05:59:45 -- nvmf/common.sh@520 -- # local subsystem config 00:19:23.765 05:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:23.765 05:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:23.765 { 00:19:23.765 "params": { 00:19:23.765 "name": "Nvme$subsystem", 00:19:23.765 "trtype": "$TEST_TRANSPORT", 00:19:23.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.765 "adrfam": "ipv4", 00:19:23.765 "trsvcid": "$NVMF_PORT", 00:19:23.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.765 "hdgst": ${hdgst:-false}, 00:19:23.765 "ddgst": ${ddgst:-false} 00:19:23.765 }, 00:19:23.765 "method": "bdev_nvme_attach_controller" 00:19:23.765 } 00:19:23.765 EOF 00:19:23.765 )") 00:19:23.765 05:59:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.765 05:59:45 -- target/dif.sh@82 -- # gen_fio_conf 00:19:23.765 05:59:45 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.765 05:59:45 -- target/dif.sh@54 -- # local file 00:19:23.765 05:59:45 -- target/dif.sh@56 -- # cat 00:19:23.765 05:59:45 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:23.765 05:59:45 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.765 05:59:45 -- nvmf/common.sh@542 -- # cat 00:19:23.765 05:59:45 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:23.765 05:59:45 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.765 05:59:45 -- common/autotest_common.sh@1330 -- # shift 00:19:23.765 05:59:45 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:23.765 05:59:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.765 05:59:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:23.765 05:59:45 -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.765 05:59:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.765 05:59:45 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:23.765 05:59:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:23.765 05:59:45 -- nvmf/common.sh@544 -- # jq . 
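The options assembled above for the digest run (bs=128k, iodepth=3, numjobs=3, runtime=10) describe the fio I/O pattern; the header/data digest flags (hdgst/ddgst) go into the bdev_nvme_attach_controller JSON printed just below, not into fio itself, and in the actual run the job file is handed to fio via /dev/fd/61 rather than written to disk. As a rough, hedged reconstruction (not the literal output of gen_fio_conf; the bdev name Nvme0n1 is assumed from the attached controller Nvme0), the generated job file is along these lines:

# Hedged reconstruction of the fio job file driving the spdk_bdev ioengine for the digest test.
cat > /tmp/dif_digest.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
numjobs=3
FIO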
00:19:23.765 05:59:45 -- nvmf/common.sh@545 -- # IFS=, 00:19:23.765 05:59:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:23.765 "params": { 00:19:23.765 "name": "Nvme0", 00:19:23.765 "trtype": "tcp", 00:19:23.765 "traddr": "10.0.0.2", 00:19:23.765 "adrfam": "ipv4", 00:19:23.765 "trsvcid": "4420", 00:19:23.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:23.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:23.765 "hdgst": true, 00:19:23.765 "ddgst": true 00:19:23.765 }, 00:19:23.765 "method": "bdev_nvme_attach_controller" 00:19:23.765 }' 00:19:24.023 05:59:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:24.023 05:59:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:24.023 05:59:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.023 05:59:45 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:24.023 05:59:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.023 05:59:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:24.023 05:59:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:24.023 05:59:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:24.023 05:59:45 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:24.023 05:59:45 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.023 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:24.023 ... 00:19:24.023 fio-3.35 00:19:24.023 Starting 3 threads 00:19:24.281 [2024-12-15 05:59:45.899894] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:24.281 [2024-12-15 05:59:45.899984] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:36.507 00:19:36.507 filename0: (groupid=0, jobs=1): err= 0: pid=86796: Sun Dec 15 05:59:56 2024 00:19:36.507 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(293MiB/10002msec) 00:19:36.507 slat (nsec): min=7150, max=62710, avg=16629.95, stdev=5816.69 00:19:36.507 clat (usec): min=11889, max=14284, avg=12786.29, stdev=474.29 00:19:36.507 lat (usec): min=11902, max=14310, avg=12802.92, stdev=474.62 00:19:36.507 clat percentiles (usec): 00:19:36.507 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12125], 20.00th=[12387], 00:19:36.507 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:36.507 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:36.507 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14222], 99.95th=[14222], 00:19:36.507 | 99.99th=[14222] 00:19:36.507 bw ( KiB/s): min=28416, max=31488, per=33.35%, avg=29955.16, stdev=809.66, samples=19 00:19:36.507 iops : min= 222, max= 246, avg=234.00, stdev= 6.32, samples=19 00:19:36.507 lat (msec) : 20=100.00% 00:19:36.507 cpu : usr=91.52%, sys=7.89%, ctx=12, majf=0, minf=9 00:19:36.507 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.507 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:36.507 filename0: (groupid=0, jobs=1): err= 0: pid=86797: Sun Dec 15 05:59:56 2024 00:19:36.507 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(293MiB/10002msec) 00:19:36.507 slat (nsec): min=7229, max=54213, avg=16374.69, stdev=5478.17 00:19:36.507 clat (usec): min=11880, max=14151, avg=12785.94, stdev=472.25 00:19:36.507 lat (usec): min=11893, max=14177, avg=12802.31, stdev=472.65 00:19:36.507 clat percentiles (usec): 00:19:36.507 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12125], 20.00th=[12387], 00:19:36.507 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:36.507 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:36.507 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14091], 99.95th=[14091], 00:19:36.507 | 99.99th=[14091] 00:19:36.507 bw ( KiB/s): min=28472, max=31488, per=33.36%, avg=29958.11, stdev=803.83, samples=19 00:19:36.507 iops : min= 222, max= 246, avg=234.00, stdev= 6.32, samples=19 00:19:36.507 lat (msec) : 20=100.00% 00:19:36.507 cpu : usr=91.66%, sys=7.71%, ctx=21, majf=0, minf=11 00:19:36.507 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.507 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:36.507 filename0: (groupid=0, jobs=1): err= 0: pid=86798: Sun Dec 15 05:59:56 2024 00:19:36.507 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(293MiB/10005msec) 00:19:36.507 slat (nsec): min=6945, max=65309, avg=15473.83, stdev=6174.71 00:19:36.507 clat (usec): min=11884, max=16569, avg=12792.53, stdev=489.56 00:19:36.507 lat (usec): min=11897, max=16596, avg=12808.00, stdev=490.05 00:19:36.507 clat percentiles (usec): 00:19:36.507 | 1.00th=[11994], 5.00th=[12125], 
10.00th=[12125], 20.00th=[12387], 00:19:36.507 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:19:36.507 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:36.507 | 99.00th=[13698], 99.50th=[13960], 99.90th=[16581], 99.95th=[16581], 00:19:36.507 | 99.99th=[16581] 00:19:36.507 bw ( KiB/s): min=28416, max=31488, per=33.34%, avg=29945.68, stdev=810.01, samples=19 00:19:36.507 iops : min= 222, max= 246, avg=233.95, stdev= 6.33, samples=19 00:19:36.507 lat (msec) : 20=100.00% 00:19:36.507 cpu : usr=91.58%, sys=7.85%, ctx=15, majf=0, minf=9 00:19:36.507 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:36.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.507 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:36.507 00:19:36.507 Run status group 0 (all jobs): 00:19:36.507 READ: bw=87.7MiB/s (92.0MB/s), 29.2MiB/s-29.2MiB/s (30.7MB/s-30.7MB/s), io=878MiB (920MB), run=10002-10005msec 00:19:36.507 05:59:56 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:36.507 05:59:56 -- target/dif.sh@43 -- # local sub 00:19:36.507 05:59:56 -- target/dif.sh@45 -- # for sub in "$@" 00:19:36.507 05:59:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:36.507 05:59:56 -- target/dif.sh@36 -- # local sub_id=0 00:19:36.507 05:59:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:36.507 05:59:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.507 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 05:59:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.507 05:59:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:36.507 05:59:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.507 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 ************************************ 00:19:36.507 END TEST fio_dif_digest 00:19:36.507 ************************************ 00:19:36.507 05:59:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.507 00:19:36.507 real 0m10.848s 00:19:36.507 user 0m28.027s 00:19:36.507 sys 0m2.553s 00:19:36.507 05:59:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:36.507 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 05:59:56 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:36.507 05:59:56 -- target/dif.sh@147 -- # nvmftestfini 00:19:36.507 05:59:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:36.507 05:59:56 -- nvmf/common.sh@116 -- # sync 00:19:36.507 05:59:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:36.507 05:59:56 -- nvmf/common.sh@119 -- # set +e 00:19:36.507 05:59:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:36.507 05:59:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:36.507 rmmod nvme_tcp 00:19:36.507 rmmod nvme_fabrics 00:19:36.507 rmmod nvme_keyring 00:19:36.507 05:59:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:36.507 05:59:56 -- nvmf/common.sh@123 -- # set -e 00:19:36.507 05:59:56 -- nvmf/common.sh@124 -- # return 0 00:19:36.507 05:59:56 -- nvmf/common.sh@477 -- # '[' -n 86034 ']' 00:19:36.507 05:59:56 -- nvmf/common.sh@478 -- # killprocess 86034 00:19:36.507 05:59:56 -- common/autotest_common.sh@936 -- # '[' -z 86034 ']' 00:19:36.507 05:59:56 -- common/autotest_common.sh@940 -- # kill 
-0 86034 00:19:36.507 05:59:56 -- common/autotest_common.sh@941 -- # uname 00:19:36.507 05:59:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:36.507 05:59:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86034 00:19:36.507 killing process with pid 86034 00:19:36.507 05:59:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:36.507 05:59:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:36.507 05:59:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86034' 00:19:36.507 05:59:56 -- common/autotest_common.sh@955 -- # kill 86034 00:19:36.507 05:59:56 -- common/autotest_common.sh@960 -- # wait 86034 00:19:36.507 05:59:56 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:36.507 05:59:56 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:36.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:36.507 Waiting for block devices as requested 00:19:36.507 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.507 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.507 05:59:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:36.507 05:59:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:36.507 05:59:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.507 05:59:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:36.507 05:59:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.507 05:59:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:36.507 05:59:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.507 05:59:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:36.507 ************************************ 00:19:36.507 END TEST nvmf_dif 00:19:36.507 ************************************ 00:19:36.507 00:19:36.507 real 0m58.829s 00:19:36.507 user 3m47.010s 00:19:36.507 sys 0m18.749s 00:19:36.507 05:59:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:36.507 05:59:57 -- common/autotest_common.sh@10 -- # set +x 00:19:36.508 05:59:57 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:36.508 05:59:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:36.508 05:59:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:36.508 05:59:57 -- common/autotest_common.sh@10 -- # set +x 00:19:36.508 ************************************ 00:19:36.508 START TEST nvmf_abort_qd_sizes 00:19:36.508 ************************************ 00:19:36.508 05:59:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:36.508 * Looking for test storage... 
00:19:36.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:36.508 05:59:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:36.508 05:59:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:36.508 05:59:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:36.508 05:59:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:36.508 05:59:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:36.508 05:59:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:36.508 05:59:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:36.508 05:59:57 -- scripts/common.sh@335 -- # IFS=.-: 00:19:36.508 05:59:57 -- scripts/common.sh@335 -- # read -ra ver1 00:19:36.508 05:59:57 -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.508 05:59:57 -- scripts/common.sh@336 -- # read -ra ver2 00:19:36.508 05:59:57 -- scripts/common.sh@337 -- # local 'op=<' 00:19:36.508 05:59:57 -- scripts/common.sh@339 -- # ver1_l=2 00:19:36.508 05:59:57 -- scripts/common.sh@340 -- # ver2_l=1 00:19:36.508 05:59:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:36.508 05:59:57 -- scripts/common.sh@343 -- # case "$op" in 00:19:36.508 05:59:57 -- scripts/common.sh@344 -- # : 1 00:19:36.508 05:59:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:36.508 05:59:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:36.508 05:59:57 -- scripts/common.sh@364 -- # decimal 1 00:19:36.508 05:59:57 -- scripts/common.sh@352 -- # local d=1 00:19:36.508 05:59:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.508 05:59:57 -- scripts/common.sh@354 -- # echo 1 00:19:36.508 05:59:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:36.508 05:59:57 -- scripts/common.sh@365 -- # decimal 2 00:19:36.508 05:59:57 -- scripts/common.sh@352 -- # local d=2 00:19:36.508 05:59:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.508 05:59:57 -- scripts/common.sh@354 -- # echo 2 00:19:36.508 05:59:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:36.508 05:59:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:36.508 05:59:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:36.508 05:59:57 -- scripts/common.sh@367 -- # return 0 00:19:36.508 05:59:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.508 05:59:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:36.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.508 --rc genhtml_branch_coverage=1 00:19:36.508 --rc genhtml_function_coverage=1 00:19:36.508 --rc genhtml_legend=1 00:19:36.508 --rc geninfo_all_blocks=1 00:19:36.508 --rc geninfo_unexecuted_blocks=1 00:19:36.508 00:19:36.508 ' 00:19:36.508 05:59:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:36.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.508 --rc genhtml_branch_coverage=1 00:19:36.508 --rc genhtml_function_coverage=1 00:19:36.508 --rc genhtml_legend=1 00:19:36.508 --rc geninfo_all_blocks=1 00:19:36.508 --rc geninfo_unexecuted_blocks=1 00:19:36.508 00:19:36.508 ' 00:19:36.508 05:59:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:36.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.508 --rc genhtml_branch_coverage=1 00:19:36.508 --rc genhtml_function_coverage=1 00:19:36.508 --rc genhtml_legend=1 00:19:36.508 --rc geninfo_all_blocks=1 00:19:36.508 --rc geninfo_unexecuted_blocks=1 00:19:36.508 00:19:36.508 ' 00:19:36.508 
05:59:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:36.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.508 --rc genhtml_branch_coverage=1 00:19:36.508 --rc genhtml_function_coverage=1 00:19:36.508 --rc genhtml_legend=1 00:19:36.508 --rc geninfo_all_blocks=1 00:19:36.508 --rc geninfo_unexecuted_blocks=1 00:19:36.508 00:19:36.508 ' 00:19:36.508 05:59:57 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:36.508 05:59:57 -- nvmf/common.sh@7 -- # uname -s 00:19:36.508 05:59:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.508 05:59:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.508 05:59:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.508 05:59:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.508 05:59:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.508 05:59:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.508 05:59:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.508 05:59:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.508 05:59:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.508 05:59:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.508 05:59:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 00:19:36.508 05:59:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=926ec8f8-6baf-4857-8f2f-72d8639146f9 00:19:36.508 05:59:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.508 05:59:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.508 05:59:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:36.508 05:59:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36.508 05:59:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.508 05:59:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.508 05:59:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.508 05:59:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.508 05:59:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.508 05:59:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.508 05:59:57 -- paths/export.sh@5 -- # export PATH 00:19:36.508 05:59:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.508 05:59:57 -- nvmf/common.sh@46 -- # : 0 00:19:36.508 05:59:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:36.508 05:59:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:36.508 05:59:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:36.508 05:59:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.508 05:59:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.508 05:59:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:36.508 05:59:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:36.508 05:59:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:36.508 05:59:57 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:36.508 05:59:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:36.508 05:59:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.508 05:59:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:36.508 05:59:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:36.508 05:59:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:36.508 05:59:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.508 05:59:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:36.508 05:59:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.508 05:59:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:36.508 05:59:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:36.508 05:59:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:36.508 05:59:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:36.508 05:59:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:36.508 05:59:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:36.508 05:59:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.508 05:59:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.508 05:59:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:36.508 05:59:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:36.508 05:59:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:36.508 05:59:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:36.508 05:59:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:36.508 05:59:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.508 05:59:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:36.508 05:59:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:36.508 05:59:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:36.508 05:59:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:36.508 05:59:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:36.508 05:59:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:36.508 Cannot find device "nvmf_tgt_br" 00:19:36.508 05:59:57 -- nvmf/common.sh@154 -- # true 00:19:36.508 05:59:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:36.508 Cannot find device "nvmf_tgt_br2" 00:19:36.508 05:59:57 -- nvmf/common.sh@155 -- # true 
00:19:36.508 05:59:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:36.508 05:59:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:36.508 Cannot find device "nvmf_tgt_br" 00:19:36.508 05:59:57 -- nvmf/common.sh@157 -- # true 00:19:36.509 05:59:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:36.509 Cannot find device "nvmf_tgt_br2" 00:19:36.509 05:59:57 -- nvmf/common.sh@158 -- # true 00:19:36.509 05:59:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:36.509 05:59:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:36.509 05:59:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:36.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:36.509 05:59:57 -- nvmf/common.sh@161 -- # true 00:19:36.509 05:59:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:36.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:36.509 05:59:57 -- nvmf/common.sh@162 -- # true 00:19:36.509 05:59:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:36.509 05:59:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:36.509 05:59:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:36.509 05:59:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:36.509 05:59:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:36.509 05:59:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:36.509 05:59:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:36.509 05:59:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:36.509 05:59:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:36.509 05:59:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:36.509 05:59:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:36.509 05:59:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:36.509 05:59:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:36.509 05:59:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:36.509 05:59:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:36.509 05:59:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:36.509 05:59:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:36.509 05:59:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:36.509 05:59:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:36.509 05:59:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:36.509 05:59:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:36.509 05:59:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:36.509 05:59:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:36.509 05:59:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:36.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:36.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:19:36.509 00:19:36.509 --- 10.0.0.2 ping statistics --- 00:19:36.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.509 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:36.509 05:59:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:36.509 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:36.509 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:19:36.509 00:19:36.509 --- 10.0.0.3 ping statistics --- 00:19:36.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.509 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:36.509 05:59:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:36.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:36.509 00:19:36.509 --- 10.0.0.1 ping statistics --- 00:19:36.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.509 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:36.509 05:59:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.509 05:59:57 -- nvmf/common.sh@421 -- # return 0 00:19:36.509 05:59:57 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:36.509 05:59:57 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:36.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:37.027 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:37.027 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:37.027 05:59:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.027 05:59:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:37.027 05:59:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:37.027 05:59:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.027 05:59:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:37.027 05:59:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:37.027 05:59:58 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:37.027 05:59:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:37.027 05:59:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:37.027 05:59:58 -- common/autotest_common.sh@10 -- # set +x 00:19:37.027 05:59:58 -- nvmf/common.sh@469 -- # nvmfpid=87404 00:19:37.027 05:59:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:37.027 05:59:58 -- nvmf/common.sh@470 -- # waitforlisten 87404 00:19:37.027 05:59:58 -- common/autotest_common.sh@829 -- # '[' -z 87404 ']' 00:19:37.027 05:59:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.027 05:59:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.027 05:59:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.027 05:59:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.027 05:59:58 -- common/autotest_common.sh@10 -- # set +x 00:19:37.027 [2024-12-15 05:59:58.623928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:37.027 [2024-12-15 05:59:58.624045] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.286 [2024-12-15 05:59:58.766198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:37.286 [2024-12-15 05:59:58.807307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:37.286 [2024-12-15 05:59:58.807502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.286 [2024-12-15 05:59:58.807517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.286 [2024-12-15 05:59:58.807527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.286 [2024-12-15 05:59:58.807743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.286 [2024-12-15 05:59:58.807973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.286 [2024-12-15 05:59:58.808025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.286 [2024-12-15 05:59:58.808394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.223 05:59:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.223 05:59:59 -- common/autotest_common.sh@862 -- # return 0 00:19:38.224 05:59:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:38.224 05:59:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.224 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 05:59:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:38.224 05:59:59 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:38.224 05:59:59 -- scripts/common.sh@312 -- # local nvmes 00:19:38.224 05:59:59 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:38.224 05:59:59 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:38.224 05:59:59 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:38.224 05:59:59 -- scripts/common.sh@297 -- # local bdf= 00:19:38.224 05:59:59 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:38.224 05:59:59 -- scripts/common.sh@232 -- # local class 00:19:38.224 05:59:59 -- scripts/common.sh@233 -- # local subclass 00:19:38.224 05:59:59 -- scripts/common.sh@234 -- # local progif 00:19:38.224 05:59:59 -- scripts/common.sh@235 -- # printf %02x 1 00:19:38.224 05:59:59 -- scripts/common.sh@235 -- # class=01 00:19:38.224 05:59:59 -- scripts/common.sh@236 -- # printf %02x 8 00:19:38.224 05:59:59 -- scripts/common.sh@236 -- # subclass=08 00:19:38.224 05:59:59 -- scripts/common.sh@237 -- # printf %02x 2 00:19:38.224 05:59:59 -- scripts/common.sh@237 -- # progif=02 00:19:38.224 05:59:59 -- scripts/common.sh@239 -- # hash lspci 00:19:38.224 05:59:59 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:38.224 05:59:59 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:38.224 05:59:59 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:38.224 05:59:59 -- 
scripts/common.sh@244 -- # tr -d '"' 00:19:38.224 05:59:59 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:38.224 05:59:59 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:38.224 05:59:59 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:38.224 05:59:59 -- scripts/common.sh@15 -- # local i 00:19:38.224 05:59:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:38.224 05:59:59 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:38.224 05:59:59 -- scripts/common.sh@24 -- # return 0 00:19:38.224 05:59:59 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:38.224 05:59:59 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:38.224 05:59:59 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:38.224 05:59:59 -- scripts/common.sh@15 -- # local i 00:19:38.224 05:59:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:38.224 05:59:59 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:38.224 05:59:59 -- scripts/common.sh@24 -- # return 0 00:19:38.224 05:59:59 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:38.224 05:59:59 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:38.224 05:59:59 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:38.224 05:59:59 -- scripts/common.sh@322 -- # uname -s 00:19:38.224 05:59:59 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:38.224 05:59:59 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:38.224 05:59:59 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:38.224 05:59:59 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:38.224 05:59:59 -- scripts/common.sh@322 -- # uname -s 00:19:38.224 05:59:59 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:38.224 05:59:59 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:38.224 05:59:59 -- scripts/common.sh@327 -- # (( 2 )) 00:19:38.224 05:59:59 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:38.224 05:59:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:38.224 05:59:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:38.224 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 ************************************ 00:19:38.224 START TEST spdk_target_abort 00:19:38.224 ************************************ 00:19:38.224 05:59:59 -- common/autotest_common.sh@1114 -- # spdk_target 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:38.224 05:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.224 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 spdk_targetn1 00:19:38.224 05:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:38.224 05:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.224 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 [2024-12-15 
05:59:59.734856] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.224 05:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:38.224 05:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.224 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 05:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:38.224 05:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.224 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 05:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:38.224 05:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.224 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 [2024-12-15 05:59:59.767056] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.224 05:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:38.224 05:59:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:41.510 Initializing NVMe Controllers 00:19:41.510 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:41.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:41.510 Initialization complete. Launching workers. 00:19:41.510 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10535, failed: 0 00:19:41.510 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1070, failed to submit 9465 00:19:41.510 success 827, unsuccess 243, failed 0 00:19:41.510 06:00:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:41.510 06:00:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:44.796 Initializing NVMe Controllers 00:19:44.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:44.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:44.797 Initialization complete. Launching workers. 00:19:44.797 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9038, failed: 0 00:19:44.797 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1183, failed to submit 7855 00:19:44.797 success 401, unsuccess 782, failed 0 00:19:44.797 06:00:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:44.797 06:00:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:48.080 Initializing NVMe Controllers 00:19:48.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:48.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:48.080 Initialization complete. Launching workers. 
00:19:48.080 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32247, failed: 0 00:19:48.080 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2331, failed to submit 29916 00:19:48.080 success 417, unsuccess 1914, failed 0 00:19:48.080 06:00:09 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:19:48.080 06:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.080 06:00:09 -- common/autotest_common.sh@10 -- # set +x 00:19:48.080 06:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.080 06:00:09 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:48.080 06:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.080 06:00:09 -- common/autotest_common.sh@10 -- # set +x 00:19:48.338 06:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.338 06:00:09 -- target/abort_qd_sizes.sh@62 -- # killprocess 87404 00:19:48.338 06:00:09 -- common/autotest_common.sh@936 -- # '[' -z 87404 ']' 00:19:48.338 06:00:09 -- common/autotest_common.sh@940 -- # kill -0 87404 00:19:48.338 06:00:09 -- common/autotest_common.sh@941 -- # uname 00:19:48.338 06:00:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:48.338 06:00:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87404 00:19:48.338 06:00:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:48.338 killing process with pid 87404 00:19:48.338 06:00:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:48.339 06:00:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87404' 00:19:48.339 06:00:09 -- common/autotest_common.sh@955 -- # kill 87404 00:19:48.339 06:00:09 -- common/autotest_common.sh@960 -- # wait 87404 00:19:48.598 00:19:48.598 real 0m10.380s 00:19:48.598 user 0m42.590s 00:19:48.598 sys 0m1.920s 00:19:48.598 06:00:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:48.598 06:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:48.598 ************************************ 00:19:48.598 END TEST spdk_target_abort 00:19:48.598 ************************************ 00:19:48.598 06:00:10 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:19:48.598 06:00:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:48.598 06:00:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:48.598 06:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:48.598 ************************************ 00:19:48.598 START TEST kernel_target_abort 00:19:48.598 ************************************ 00:19:48.598 06:00:10 -- common/autotest_common.sh@1114 -- # kernel_target 00:19:48.598 06:00:10 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:19:48.598 06:00:10 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:19:48.598 06:00:10 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:19:48.598 06:00:10 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:19:48.598 06:00:10 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:19:48.598 06:00:10 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:48.598 06:00:10 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:48.598 06:00:10 -- nvmf/common.sh@627 -- # local block nvme 00:19:48.598 06:00:10 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:19:48.598 06:00:10 -- nvmf/common.sh@630 -- # modprobe nvmet 00:19:48.598 06:00:10 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:48.598 06:00:10 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:48.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.116 Waiting for block devices as requested 00:19:49.116 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:49.116 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:49.116 06:00:10 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:49.116 06:00:10 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:49.116 06:00:10 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:19:49.116 06:00:10 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:19:49.116 06:00:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:49.116 No valid GPT data, bailing 00:19:49.116 06:00:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:49.375 06:00:10 -- scripts/common.sh@393 -- # pt= 00:19:49.375 06:00:10 -- scripts/common.sh@394 -- # return 1 00:19:49.375 06:00:10 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:19:49.375 06:00:10 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:49.375 06:00:10 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:49.375 06:00:10 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:19:49.375 06:00:10 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:19:49.375 06:00:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:49.375 No valid GPT data, bailing 00:19:49.375 06:00:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:49.375 06:00:10 -- scripts/common.sh@393 -- # pt= 00:19:49.375 06:00:10 -- scripts/common.sh@394 -- # return 1 00:19:49.375 06:00:10 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:19:49.375 06:00:10 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:49.375 06:00:10 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:19:49.375 06:00:10 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:19:49.375 06:00:10 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:19:49.375 06:00:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:19:49.375 No valid GPT data, bailing 00:19:49.375 06:00:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:49.375 06:00:10 -- scripts/common.sh@393 -- # pt= 00:19:49.375 06:00:10 -- scripts/common.sh@394 -- # return 1 00:19:49.375 06:00:10 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:19:49.375 06:00:10 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:49.376 06:00:10 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:19:49.376 06:00:10 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:19:49.376 06:00:10 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:19:49.376 06:00:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:19:49.376 No valid GPT data, bailing 00:19:49.376 06:00:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:49.376 06:00:10 -- scripts/common.sh@393 -- # pt= 00:19:49.376 06:00:10 -- scripts/common.sh@394 -- # return 1 00:19:49.376 06:00:10 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:19:49.376 06:00:10 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:19:49.376 06:00:10 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:49.376 06:00:10 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:49.376 06:00:10 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:49.376 06:00:10 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:19:49.376 06:00:10 -- nvmf/common.sh@654 -- # echo 1 00:19:49.376 06:00:10 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:19:49.376 06:00:10 -- nvmf/common.sh@656 -- # echo 1 00:19:49.376 06:00:10 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:19:49.376 06:00:10 -- nvmf/common.sh@663 -- # echo tcp 00:19:49.376 06:00:10 -- nvmf/common.sh@664 -- # echo 4420 00:19:49.376 06:00:10 -- nvmf/common.sh@665 -- # echo ipv4 00:19:49.376 06:00:10 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:49.376 06:00:10 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:926ec8f8-6baf-4857-8f2f-72d8639146f9 --hostid=926ec8f8-6baf-4857-8f2f-72d8639146f9 -a 10.0.0.1 -t tcp -s 4420 00:19:49.376 00:19:49.376 Discovery Log Number of Records 2, Generation counter 2 00:19:49.376 =====Discovery Log Entry 0====== 00:19:49.376 trtype: tcp 00:19:49.376 adrfam: ipv4 00:19:49.376 subtype: current discovery subsystem 00:19:49.376 treq: not specified, sq flow control disable supported 00:19:49.376 portid: 1 00:19:49.376 trsvcid: 4420 00:19:49.376 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:49.376 traddr: 10.0.0.1 00:19:49.376 eflags: none 00:19:49.376 sectype: none 00:19:49.376 =====Discovery Log Entry 1====== 00:19:49.376 trtype: tcp 00:19:49.376 adrfam: ipv4 00:19:49.376 subtype: nvme subsystem 00:19:49.376 treq: not specified, sq flow control disable supported 00:19:49.376 portid: 1 00:19:49.376 trsvcid: 4420 00:19:49.376 subnqn: kernel_target 00:19:49.376 traddr: 10.0.0.1 00:19:49.376 eflags: none 00:19:49.376 sectype: none 00:19:49.376 06:00:11 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:49.635 06:00:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:52.919 Initializing NVMe Controllers 00:19:52.919 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:52.920 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:52.920 Initialization complete. Launching workers. 00:19:52.920 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30023, failed: 0 00:19:52.920 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30023, failed to submit 0 00:19:52.920 success 0, unsuccess 30023, failed 0 00:19:52.920 06:00:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:52.920 06:00:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:56.206 Initializing NVMe Controllers 00:19:56.206 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:56.206 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:56.206 Initialization complete. Launching workers. 00:19:56.206 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 62409, failed: 0 00:19:56.206 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26161, failed to submit 36248 00:19:56.206 success 0, unsuccess 26161, failed 0 00:19:56.206 06:00:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:56.206 06:00:17 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:59.494 Initializing NVMe Controllers 00:19:59.494 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:59.494 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:59.494 Initialization complete. Launching workers. 
00:19:59.494 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 72891, failed: 0 00:19:59.494 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18194, failed to submit 54697 00:19:59.494 success 0, unsuccess 18194, failed 0 00:19:59.494 06:00:20 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:19:59.494 06:00:20 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:19:59.494 06:00:20 -- nvmf/common.sh@677 -- # echo 0 00:19:59.494 06:00:20 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:19:59.494 06:00:20 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:59.494 06:00:20 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:59.494 06:00:20 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:59.494 06:00:20 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:19:59.494 06:00:20 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:19:59.494 00:19:59.494 real 0m10.526s 00:19:59.494 user 0m5.439s 00:19:59.494 sys 0m2.494s 00:19:59.494 06:00:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:59.494 06:00:20 -- common/autotest_common.sh@10 -- # set +x 00:19:59.494 ************************************ 00:19:59.494 END TEST kernel_target_abort 00:19:59.494 ************************************ 00:19:59.494 06:00:20 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:19:59.494 06:00:20 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:19:59.494 06:00:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:59.494 06:00:20 -- nvmf/common.sh@116 -- # sync 00:19:59.494 06:00:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:59.494 06:00:20 -- nvmf/common.sh@119 -- # set +e 00:19:59.494 06:00:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:59.494 06:00:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:59.494 rmmod nvme_tcp 00:19:59.494 rmmod nvme_fabrics 00:19:59.494 rmmod nvme_keyring 00:19:59.494 06:00:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:59.494 06:00:20 -- nvmf/common.sh@123 -- # set -e 00:19:59.494 06:00:20 -- nvmf/common.sh@124 -- # return 0 00:19:59.494 06:00:20 -- nvmf/common.sh@477 -- # '[' -n 87404 ']' 00:19:59.494 06:00:20 -- nvmf/common.sh@478 -- # killprocess 87404 00:19:59.494 06:00:20 -- common/autotest_common.sh@936 -- # '[' -z 87404 ']' 00:19:59.494 06:00:20 -- common/autotest_common.sh@940 -- # kill -0 87404 00:19:59.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87404) - No such process 00:19:59.494 Process with pid 87404 is not found 00:19:59.494 06:00:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87404 is not found' 00:19:59.494 06:00:20 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:59.494 06:00:20 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:00.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:00.062 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:00.062 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:00.062 06:00:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:00.062 06:00:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:00.062 06:00:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.062 06:00:21 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:20:00.062 06:00:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.062 06:00:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:00.062 06:00:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.062 06:00:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:00.062 00:20:00.062 real 0m24.418s 00:20:00.062 user 0m49.429s 00:20:00.062 sys 0m5.742s 00:20:00.062 06:00:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:00.062 06:00:21 -- common/autotest_common.sh@10 -- # set +x 00:20:00.062 ************************************ 00:20:00.062 END TEST nvmf_abort_qd_sizes 00:20:00.062 ************************************ 00:20:00.062 06:00:21 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:00.062 06:00:21 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:20:00.062 06:00:21 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:20:00.062 06:00:21 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:00.062 06:00:21 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:00.062 06:00:21 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:20:00.062 06:00:21 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:20:00.062 06:00:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.062 06:00:21 -- common/autotest_common.sh@10 -- # set +x 00:20:00.062 06:00:21 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:20:00.062 06:00:21 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:20:00.062 06:00:21 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:20:00.062 06:00:21 -- common/autotest_common.sh@10 -- # set +x 00:20:01.965 INFO: APP EXITING 00:20:01.965 INFO: killing all VMs 00:20:01.965 INFO: killing vhost app 00:20:01.965 INFO: EXIT DONE 00:20:02.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:02.533 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:02.533 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:03.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:03.102 Cleaning 00:20:03.102 Removing: /var/run/dpdk/spdk0/config 00:20:03.102 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:03.102 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:03.102 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:03.102 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:03.102 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:03.102 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:03.102 Removing: /var/run/dpdk/spdk1/config 00:20:03.102 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:03.102 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:03.102 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:03.102 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:03.102 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:03.102 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:03.102 Removing: /var/run/dpdk/spdk2/config 00:20:03.102 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:03.102 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:03.102 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:03.102 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:03.102 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:03.102 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:03.102 Removing: /var/run/dpdk/spdk3/config 00:20:03.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:03.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:03.102 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:03.361 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:03.361 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:03.361 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:03.361 Removing: /var/run/dpdk/spdk4/config 00:20:03.361 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:03.361 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:03.361 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:03.361 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:03.361 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:03.361 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:03.361 Removing: /dev/shm/nvmf_trace.0 00:20:03.361 Removing: /dev/shm/spdk_tgt_trace.pid65570 00:20:03.361 Removing: /var/run/dpdk/spdk0 00:20:03.361 Removing: /var/run/dpdk/spdk1 00:20:03.361 Removing: /var/run/dpdk/spdk2 00:20:03.361 Removing: /var/run/dpdk/spdk3 00:20:03.361 Removing: /var/run/dpdk/spdk4 00:20:03.361 Removing: /var/run/dpdk/spdk_pid65429 00:20:03.361 Removing: /var/run/dpdk/spdk_pid65570 00:20:03.361 Removing: /var/run/dpdk/spdk_pid65823 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66019 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66161 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66238 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66322 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66420 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66493 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66526 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66567 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66630 00:20:03.361 Removing: /var/run/dpdk/spdk_pid66711 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67143 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67190 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67235 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67251 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67313 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67329 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67390 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67406 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67452 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67470 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67510 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67528 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67652 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67687 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67769 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67815 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67845 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67898 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67916 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67952 00:20:03.361 Removing: /var/run/dpdk/spdk_pid67966 
00:20:03.361 Removing: /var/run/dpdk/spdk_pid67995 00:20:03.361 Removing: /var/run/dpdk/spdk_pid68022 00:20:03.361 Removing: /var/run/dpdk/spdk_pid68051 00:20:03.361 Removing: /var/run/dpdk/spdk_pid68065 00:20:03.361 Removing: /var/run/dpdk/spdk_pid68100 00:20:03.361 Removing: /var/run/dpdk/spdk_pid68121 00:20:03.361 Removing: /var/run/dpdk/spdk_pid68150 00:20:03.361 Removing: /var/run/dpdk/spdk_pid68165 00:20:03.361 Removing: /var/run/dpdk/spdk_pid68204 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68218 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68247 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68269 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68303 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68317 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68353 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68367 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68401 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68421 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68450 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68469 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68504 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68518 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68547 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68566 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68601 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68615 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68646 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68669 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68698 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68715 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68758 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68775 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68807 00:20:03.362 Removing: /var/run/dpdk/spdk_pid68827 00:20:03.620 Removing: /var/run/dpdk/spdk_pid68861 00:20:03.620 Removing: /var/run/dpdk/spdk_pid68875 00:20:03.620 Removing: /var/run/dpdk/spdk_pid68911 00:20:03.620 Removing: /var/run/dpdk/spdk_pid68982 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69069 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69401 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69413 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69448 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69457 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69470 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69488 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69501 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69514 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69532 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69545 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69557 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69571 00:20:03.620 Removing: /var/run/dpdk/spdk_pid69589 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69597 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69615 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69627 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69641 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69659 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69666 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69685 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69709 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69727 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69749 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69819 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69840 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69855 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69878 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69893 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69895 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69930 00:20:03.621 Removing: 
/var/run/dpdk/spdk_pid69947 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69968 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69981 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69983 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69985 00:20:03.621 Removing: /var/run/dpdk/spdk_pid69998 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70000 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70008 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70015 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70036 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70068 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70072 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70106 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70110 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70118 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70158 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70164 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70196 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70198 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70206 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70213 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70221 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70228 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70230 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70238 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70319 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70355 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70467 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70497 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70537 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70557 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70566 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70586 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70617 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70632 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70708 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70722 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70754 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70826 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70876 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70901 00:20:03.621 Removing: /var/run/dpdk/spdk_pid70999 00:20:03.621 Removing: /var/run/dpdk/spdk_pid71039 00:20:03.621 Removing: /var/run/dpdk/spdk_pid71071 00:20:03.621 Removing: /var/run/dpdk/spdk_pid71294 00:20:03.621 Removing: /var/run/dpdk/spdk_pid71386 00:20:03.621 Removing: /var/run/dpdk/spdk_pid71414 00:20:03.621 Removing: /var/run/dpdk/spdk_pid71730 00:20:03.621 Removing: /var/run/dpdk/spdk_pid71774 00:20:03.621 Removing: /var/run/dpdk/spdk_pid72084 00:20:03.880 Removing: /var/run/dpdk/spdk_pid72489 00:20:03.880 Removing: /var/run/dpdk/spdk_pid72758 00:20:03.880 Removing: /var/run/dpdk/spdk_pid73509 00:20:03.880 Removing: /var/run/dpdk/spdk_pid74338 00:20:03.880 Removing: /var/run/dpdk/spdk_pid74450 00:20:03.880 Removing: /var/run/dpdk/spdk_pid74518 00:20:03.880 Removing: /var/run/dpdk/spdk_pid75775 00:20:03.880 Removing: /var/run/dpdk/spdk_pid75998 00:20:03.880 Removing: /var/run/dpdk/spdk_pid76305 00:20:03.880 Removing: /var/run/dpdk/spdk_pid76418 00:20:03.880 Removing: /var/run/dpdk/spdk_pid76544 00:20:03.880 Removing: /var/run/dpdk/spdk_pid76567 00:20:03.880 Removing: /var/run/dpdk/spdk_pid76595 00:20:03.880 Removing: /var/run/dpdk/spdk_pid76617 00:20:03.880 Removing: /var/run/dpdk/spdk_pid76714 00:20:03.880 Removing: /var/run/dpdk/spdk_pid76850 00:20:03.880 Removing: /var/run/dpdk/spdk_pid77002 00:20:03.880 Removing: /var/run/dpdk/spdk_pid77084 00:20:03.880 Removing: /var/run/dpdk/spdk_pid77477 00:20:03.880 Removing: /var/run/dpdk/spdk_pid77825 
00:20:03.880 Removing: /var/run/dpdk/spdk_pid77831 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80039 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80041 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80310 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80330 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80349 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80374 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80385 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80473 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80475 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80583 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80596 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80704 00:20:03.880 Removing: /var/run/dpdk/spdk_pid80706 00:20:03.880 Removing: /var/run/dpdk/spdk_pid81104 00:20:03.880 Removing: /var/run/dpdk/spdk_pid81153 00:20:03.880 Removing: /var/run/dpdk/spdk_pid81256 00:20:03.880 Removing: /var/run/dpdk/spdk_pid81341 00:20:03.880 Removing: /var/run/dpdk/spdk_pid81669 00:20:03.880 Removing: /var/run/dpdk/spdk_pid81871 00:20:03.880 Removing: /var/run/dpdk/spdk_pid82257 00:20:03.880 Removing: /var/run/dpdk/spdk_pid82785 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83239 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83287 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83334 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83382 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83485 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83534 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83595 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83655 00:20:03.880 Removing: /var/run/dpdk/spdk_pid83970 00:20:03.880 Removing: /var/run/dpdk/spdk_pid85140 00:20:03.880 Removing: /var/run/dpdk/spdk_pid85286 00:20:03.880 Removing: /var/run/dpdk/spdk_pid85530 00:20:03.880 Removing: /var/run/dpdk/spdk_pid86094 00:20:03.880 Removing: /var/run/dpdk/spdk_pid86253 00:20:03.880 Removing: /var/run/dpdk/spdk_pid86410 00:20:03.880 Removing: /var/run/dpdk/spdk_pid86507 00:20:03.880 Removing: /var/run/dpdk/spdk_pid86683 00:20:03.880 Removing: /var/run/dpdk/spdk_pid86792 00:20:03.880 Removing: /var/run/dpdk/spdk_pid87455 00:20:03.880 Removing: /var/run/dpdk/spdk_pid87490 00:20:03.880 Removing: /var/run/dpdk/spdk_pid87525 00:20:03.880 Removing: /var/run/dpdk/spdk_pid87773 00:20:03.880 Removing: /var/run/dpdk/spdk_pid87805 00:20:03.880 Removing: /var/run/dpdk/spdk_pid87840 00:20:03.880 Clean 00:20:04.138 killing process with pid 59796 00:20:04.138 killing process with pid 59799 00:20:04.138 06:00:25 -- common/autotest_common.sh@1446 -- # return 0 00:20:04.138 06:00:25 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:20:04.138 06:00:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.138 06:00:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.138 06:00:25 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:20:04.138 06:00:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.138 06:00:25 -- common/autotest_common.sh@10 -- # set +x 00:20:04.138 06:00:25 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:04.138 06:00:25 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:04.138 06:00:25 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:04.138 06:00:25 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:20:04.138 06:00:25 -- spdk/autotest.sh@383 -- # hostname 00:20:04.138 06:00:25 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:04.396 geninfo: WARNING: invalid characters removed from testname! 00:20:26.367 06:00:47 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:30.554 06:00:51 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:32.456 06:00:53 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:34.988 06:00:56 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:37.521 06:00:58 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:40.054 06:01:01 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:41.975 06:01:03 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:41.975 06:01:03 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:20:41.975 06:01:03 -- common/autotest_common.sh@1690 -- $ lcov --version 00:20:41.975 06:01:03 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:20:42.234 06:01:03 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:20:42.234 06:01:03 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:20:42.234 06:01:03 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:20:42.234 06:01:03 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:20:42.234 06:01:03 -- scripts/common.sh@335 -- $ IFS=.-: 00:20:42.234 06:01:03 -- scripts/common.sh@335 -- $ read -ra ver1 00:20:42.234 06:01:03 -- scripts/common.sh@336 -- $ IFS=.-: 
00:20:42.234 06:01:03 -- scripts/common.sh@336 -- $ read -ra ver2 00:20:42.234 06:01:03 -- scripts/common.sh@337 -- $ local 'op=<' 00:20:42.234 06:01:03 -- scripts/common.sh@339 -- $ ver1_l=2 00:20:42.234 06:01:03 -- scripts/common.sh@340 -- $ ver2_l=1 00:20:42.234 06:01:03 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:20:42.234 06:01:03 -- scripts/common.sh@343 -- $ case "$op" in 00:20:42.234 06:01:03 -- scripts/common.sh@344 -- $ : 1 00:20:42.234 06:01:03 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:20:42.234 06:01:03 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.234 06:01:03 -- scripts/common.sh@364 -- $ decimal 1 00:20:42.234 06:01:03 -- scripts/common.sh@352 -- $ local d=1 00:20:42.234 06:01:03 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:42.234 06:01:03 -- scripts/common.sh@354 -- $ echo 1 00:20:42.234 06:01:03 -- scripts/common.sh@364 -- $ ver1[v]=1 00:20:42.234 06:01:03 -- scripts/common.sh@365 -- $ decimal 2 00:20:42.234 06:01:03 -- scripts/common.sh@352 -- $ local d=2 00:20:42.234 06:01:03 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:42.234 06:01:03 -- scripts/common.sh@354 -- $ echo 2 00:20:42.234 06:01:03 -- scripts/common.sh@365 -- $ ver2[v]=2 00:20:42.234 06:01:03 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:20:42.234 06:01:03 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:20:42.234 06:01:03 -- scripts/common.sh@367 -- $ return 0 00:20:42.234 06:01:03 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.234 06:01:03 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:20:42.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.234 --rc genhtml_branch_coverage=1 00:20:42.234 --rc genhtml_function_coverage=1 00:20:42.234 --rc genhtml_legend=1 00:20:42.234 --rc geninfo_all_blocks=1 00:20:42.234 --rc geninfo_unexecuted_blocks=1 00:20:42.234 00:20:42.234 ' 00:20:42.234 06:01:03 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:20:42.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.235 --rc genhtml_branch_coverage=1 00:20:42.235 --rc genhtml_function_coverage=1 00:20:42.235 --rc genhtml_legend=1 00:20:42.235 --rc geninfo_all_blocks=1 00:20:42.235 --rc geninfo_unexecuted_blocks=1 00:20:42.235 00:20:42.235 ' 00:20:42.235 06:01:03 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:20:42.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.235 --rc genhtml_branch_coverage=1 00:20:42.235 --rc genhtml_function_coverage=1 00:20:42.235 --rc genhtml_legend=1 00:20:42.235 --rc geninfo_all_blocks=1 00:20:42.235 --rc geninfo_unexecuted_blocks=1 00:20:42.235 00:20:42.235 ' 00:20:42.235 06:01:03 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:20:42.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.235 --rc genhtml_branch_coverage=1 00:20:42.235 --rc genhtml_function_coverage=1 00:20:42.235 --rc genhtml_legend=1 00:20:42.235 --rc geninfo_all_blocks=1 00:20:42.235 --rc geninfo_unexecuted_blocks=1 00:20:42.235 00:20:42.235 ' 00:20:42.235 06:01:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.235 06:01:03 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:42.235 06:01:03 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.235 06:01:03 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.235 06:01:03 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.235 06:01:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.235 06:01:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.235 06:01:03 -- paths/export.sh@5 -- $ export PATH 00:20:42.235 06:01:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.235 06:01:03 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:42.235 06:01:03 -- common/autobuild_common.sh@440 -- $ date +%s 00:20:42.235 06:01:03 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734242463.XXXXXX 00:20:42.235 06:01:03 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734242463.YfPFZk 00:20:42.235 06:01:03 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:20:42.235 06:01:03 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:20:42.235 06:01:03 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:20:42.235 06:01:03 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:20:42.235 06:01:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:42.235 06:01:03 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:42.235 06:01:03 -- common/autobuild_common.sh@456 -- $ get_config_params 00:20:42.235 06:01:03 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:20:42.235 06:01:03 -- common/autotest_common.sh@10 -- $ set +x 00:20:42.235 06:01:03 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:20:42.235 06:01:03 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:42.235 06:01:03 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
00:20:42.235 06:01:03 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:42.235 06:01:03 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:20:42.235 06:01:03 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:42.235 06:01:03 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:42.235 06:01:03 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:42.235 06:01:03 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:42.235 06:01:03 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:42.235 06:01:03 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:42.235 + [[ -n 5973 ]] 00:20:42.235 + sudo kill 5973 00:20:42.244 [Pipeline] } 00:20:42.259 [Pipeline] // timeout 00:20:42.264 [Pipeline] } 00:20:42.277 [Pipeline] // stage 00:20:42.281 [Pipeline] } 00:20:42.295 [Pipeline] // catchError 00:20:42.303 [Pipeline] stage 00:20:42.305 [Pipeline] { (Stop VM) 00:20:42.319 [Pipeline] sh 00:20:42.600 + vagrant halt 00:20:46.783 ==> default: Halting domain... 00:20:52.061 [Pipeline] sh 00:20:52.351 + vagrant destroy -f 00:20:55.631 ==> default: Removing domain... 00:20:55.642 [Pipeline] sh 00:20:55.921 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:55.929 [Pipeline] } 00:20:55.944 [Pipeline] // stage 00:20:55.948 [Pipeline] } 00:20:55.962 [Pipeline] // dir 00:20:55.967 [Pipeline] } 00:20:55.980 [Pipeline] // wrap 00:20:55.985 [Pipeline] } 00:20:55.998 [Pipeline] // catchError 00:20:56.007 [Pipeline] stage 00:20:56.009 [Pipeline] { (Epilogue) 00:20:56.021 [Pipeline] sh 00:20:56.301 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:01.580 [Pipeline] catchError 00:21:01.582 [Pipeline] { 00:21:01.594 [Pipeline] sh 00:21:01.873 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:02.132 Artifacts sizes are good 00:21:02.140 [Pipeline] } 00:21:02.154 [Pipeline] // catchError 00:21:02.163 [Pipeline] archiveArtifacts 00:21:02.169 Archiving artifacts 00:21:02.309 [Pipeline] cleanWs 00:21:02.320 [WS-CLEANUP] Deleting project workspace... 00:21:02.320 [WS-CLEANUP] Deferred wipeout is used... 00:21:02.325 [WS-CLEANUP] done 00:21:02.327 [Pipeline] } 00:21:02.341 [Pipeline] // stage 00:21:02.346 [Pipeline] } 00:21:02.356 [Pipeline] // node 00:21:02.361 [Pipeline] End of Pipeline 00:21:02.388 Finished: SUCCESS